Michael Murtaugh 7 years ago
commit 7563356d13

.gitignore

@@ -0,0 +1,13 @@
*.pyc
*~
drop/
drop.json
tiles/
lib/
venv/
fonts/
xpub.node.json
drop.node.json
archive.json
about.json
index.json

@@ -0,0 +1,11 @@
The new study path in experimental publishing merges two stories: a singular one, and a slightly more general one.
The singular story is that of the Media Design and Communication Master, which for a decade has established a critical approach to the ambiguous notion of media.
In the past years we have welcomed a wide range of practitioners from the cultural field, visual and digital artists, graphic designers, musicians, performance artists, architects, fine press book makers, and computer programmers, to help them develop a practice that explores the social, technical, cultural and political dimensions of their work, and ultimately, as we have taken to saying, to encourage them to <em>design their own media</em>.
<p>Such an approach has resulted in a rich variety of projects and writings, from <a href="http://fffff.at/pirates/">browser plugins that connect Amazon purchase buttons to Pirate Bay torrent links</a>, <a href="http://joak.nospace.at/rrtrn">chat systems and audio feedback loops working with networks of tape reels</a>, <a href="http://www.birgitbachler.com/portfolio/portfolio/the-discrete-dialogue-network/">autonomous phone</a> and <a href="http://oyoana.com/leaveamessage">computer based voice mail networks</a>, <a href="http://epicpedia.org/">theatre scripts based on Wikipedia page histories</a>, <a href="http://p2pdesignstrategies.parcodiyellowstone.it/">peer-to-peer workflows for graphic designers</a>, <a href="http://stdin.fr/Works/BCC">generative artists' books</a> and <a href="http://mhoogenboom.nl/?portfolio=epub-boem-paukeslag">concrete poetry epubs</a>, <a href="">secret social networks</a> and file sharing hidden <a href="https://pzwiki.wdka.nl/mediadesign/Ddump:_recycling_in_the_digital_context">in the trash can</a> of your computer desktop, <a href="http://monoskop.org">wikis to publish precarious materials</a> and <a href="http://p-dpa.net/">weblogs of emerging forms of online artistic publishing</a>, and many other amazing things.</p>
The common point of these projects is that they all look at particular issues, tensions, and conflicts relevant to their authors' fields of practice, and communicate concerns that matter to a much broader public. Why? Because they all offer a conversation about the cultural diversity, the systems, and the networks of humans and machines that constitute our society.
This aspect of communication, sharing, informing, and thinking about how things are made public and circulate in public space is what links us today with the other, more general story: that of publishing. Originally rooted in print media, the notion of publishing has in the last decades been both culturally diffused and appropriated well beyond its original domain. This does not mean that publishing has lost its sharpness; in fact, this Cambrian explosion of new publishing potentials has demonstrated how central publishing has become to a diversity of networked practices.
From app stores to art book fairs and zine shops, from darknets to sneakernets, from fansubs to on-demand services, and from tweeting to whistleblowing, the act of making things public, that is to say publishing, has become pivotal in an age infused with myriad media technologies.
What is more, the tension between the publishing heritage and novel forms of producing and sharing information has shown that old dichotomies such as analog versus digital, or local versus global, have grown increasingly irrelevant given their bond with hybrid media practices based on both old and new technologies, and their existence within mixed human and machine networks.
In sum, by experimental publishing we mean to engage with a broad set of intermingled and collaborative practices, both inherited and to be invented, so as to critically explore and actively engage with an ecosystem in which multi-layered interactions occur that are:
... social, technical, cultural and political;<br> involving actors both human and algorithmic;<br> and mediated by networks of distribution and communication of varying scales and visibility.
For this journey, we seek students motivated to challenge the protocols of publishing (in all its (im)possible forms) using play, fiction, and ambiguity as methods and strategies of production and presentation, in order to experiment at the threshold of what is possible, desirable, allowed, or disruptive in this ever expanding field.

@@ -0,0 +1,436 @@
<!DOCTYPE html>
<html lang="en">
<head>
<title></title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<script src="/lib/leaflet-1.0.1/leaflet.js"></script>
<link href="/lib/leaflet-1.0.1/leaflet.css" rel="stylesheet" type="text/css">
<link href="styles.css" rel="stylesheet" type="text/css">
</head>
<body>
<div id="frame" style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px">
<div id="map" style="width: 100%; height: 100%; background: black"></div>
<div id="text" style="position: absolute; left: 50px; top: 10px; width: auto; color: white">
</div>
</div>
<script>
(function() {
// warning CHANGES TO THIS CODE NEED TO BE ROLLED BACK INTO leaflet.py
var cell_layout, expandzoom, fourup, layoutxyz, render, split4, tiler, tiles_wrapper, zoom;
window.tiler = tiler = {};
tiler.tiles_wrapper = tiles_wrapper = function(path, ext) {
if (ext == null) { ext = "jpg"; }
var ret = {};
ret.get_tile_path = function(z, y, x) {
return path + ("/z"+z+"y"+y+"x"+x+"."+ext);
};
return ret;
};
tiler.zoom = zoom = function(tiles, caption, url, x, y, z, maxzoom) {
var c, i, k, kids, len, len1, node, r, ref, ref1;
if (x == null) {
x = 0;
}
if (y == null) {
y = 0;
}
if (z == null) {
z = 0;
}
if (maxzoom == null) {
maxzoom = 3;
}
node = {};
if (caption && x === 0 && y === 0) {
node['text'] = caption;
}
var lastc = Math.pow(2, z) - 1;
if (url && x === 0 && y === lastc) {
node['url'] = url
}
node['image'] = tiles.get_tile_path(z, y, x);
if (z < maxzoom) {
kids = [];
ref = [0, 1];
for (i = 0, len = ref.length; i < len; i++) {
r = ref[i];
ref1 = [0, 1];
for (k = 0, len1 = ref1.length; k < len1; k++) {
c = ref1[k];
kids.push(zoom(tiles, caption, url, (x * 2) + c, (y * 2) + r, z + 1, maxzoom));
}
}
node['children'] = kids;
}
return node;
};
split4 = function(items) {
var c, el, i, l, len, p, ref, results, x;
l = items.length;
p = Math.ceil(Math.log(l) / Math.log(4));
c = Math.max(1, Math.pow(4, p) / 4);
el = function(x, c) {
while (x.length < c) {
x.push(null);
}
return x;
};
ref = [items.slice(0, c), items.slice(c, c * 2), items.slice(c * 2, c * 3), items.slice(c * 3)];
results = [];
for (i = 0, len = ref.length; i < len; i++) {
x = ref[i];
results.push(el(x, c));
}
return results;
};
cell_layout = function(items) {
return [
{
y: 0,
x: 0,
item: items[0]
}, {
y: 0,
x: 1,
item: items[1]
}, {
y: 1,
x: 0,
item: items[2]
}, {
y: 1,
x: 1,
item: items[3]
}
];
};
tiler.render = render = function(items, tilewidth, tileheight, z, y, x) {
var g, i, j, kids, len, node, ref;
if (tilewidth == null) {
tilewidth = 256;
}
if (tileheight == null) {
tileheight = 256;
}
if (z == null) {
z = 0;
}
if (y == null) {
y = 0;
}
if (x == null) {
x = 0;
}
if (items.length === 1) {
x = items[0];
if (x === null) {
return null;
}
return zoom(x, '');
} else {
node = {};
node['text'] = '';
kids = [];
ref = cell_layout(split4(items));
for (i = 0, len = ref.length; i < len; i++) {
g = ref[i];
kids.push(render(g.item, tilewidth, tileheight, z + 1, (y * 2) + g.y, (x * 2) + g.x));
}
node.children = (function() {
var k, len1, results;
results = [];
for (k = 0, len1 = kids.length; k < len1; k++) {
j = kids[k];
if (j !== null) {
results.push(j);
}
}
return results;
})();
node.image = fourup((function() {
var k, len1, ref1, results;
ref1 = node.children;
results = [];
for (k = 0, len1 = ref1.length; k < len1; k++) {
j = ref1[k];
if (j !== null) {
results.push(j.image);
}
}
return results;
})(), tilewidth, tileheight);
return node;
}
};
tiler.layoutxyz = layoutxyz = function(n, x, y, z, outnode) {
var g, i, len, ref;
if (x == null) {
x = 0;
}
if (y == null) {
y = 0;
}
if (z == null) {
z = 0;
}
if (outnode == null) {
outnode = {};
}
outnode[x + "," + y + "," + z] = n;
if (n.children) {
ref = cell_layout(n.children);
for (i = 0, len = ref.length; i < len; i++) {
g = ref[i];
if (g.item) {
layoutxyz(g.item, (x * 2) + g.x, (y * 2) + g.y, z + 1, outnode);
}
}
}
return outnode;
};
tiler.fourup = fourup = function(images, tilewidth, tileheight) {
if (tilewidth == null) {
tilewidth = 256;
}
if (tileheight == null) {
tileheight = 256;
}
return function(done) {
var i, img, imgelts, len, loadcount, results, src, x;
loadcount = 0;
images = (function() {
var i, len, results;
results = [];
for (i = 0, len = images.length; i < len; i++) {
x = images[i];
if (x !== null) {
results.push(x);
}
}
return results;
})();
imgelts = [];
results = [];
for (i = 0, len = images.length; i < len; i++) {
src = images[i];
img = new Image;
imgelts.push(img);
img.addEventListener("load", function() {
var canvas, ctx, g, hh, hw, k, len1, ref;
if (++loadcount >= images.length) {
canvas = document.createElement("canvas");
canvas.width = tilewidth;
canvas.height = tileheight;
ctx = canvas.getContext("2d");
hw = tilewidth / 2;
hh = tileheight / 2;
ref = cell_layout(imgelts);
for (k = 0, len1 = ref.length; k < len1; k++) {
g = ref[k];
if (g.item) {
ctx.drawImage(g.item, g.x * hw, g.y * hh, hw, hh);
}
}
return done(null, canvas.toDataURL());
}
}, false);
if (typeof src === "function") {
console.log("inside 4up, deferring");
results.push(src(function(err, data) {
console.log(" inside 4up, GOT DATA");
return img.src = data;
}));
} else {
results.push(img.src = src);
}
}
return results;
};
};
tiler.expandzoom = expandzoom = function(node) {
var c, ret, tilespath;
if (node.zoomable) {
tilespath = node.image.replace(/\/[^\/]+$/, "");
var ext = node.image.match(/\.([^\.]+)$/);
if (ext != null) { ext = ext[1] };
ret = zoom(tiles_wrapper(tilespath, ext), node.text, node.url);
return ret;
}
if (node.children) {
node.children = (function() {
var i, len, ref, results;
ref = node.children;
results = [];
for (i = 0, len = ref.length; i < len; i++) {
c = ref[i];
if (c != null) {
results.push(expandzoom(c));
}
}
return results;
})();
}
return node;
};
/* DynamicTiles */
/*
A simple GridLayer extension that takes an external "nodes" object as option,
Nodes are keyed [x,y,z]
and expected to be of the form:
{
text: "My text",
image: "imagepath.jpg"
}
*/
L.GridLayer.DynamicTiles = L.GridLayer.extend({
createTile: function (coords, done) { // done = (err, tile)
// console.log("createTile", coords, this.options, this.options.nodes);
var tile = document.createElement('div'),
node = this.options.nodes[coords.x+","+coords.y+","+coords.z],
defer = false;
tile.classList.add("tile");
if (node != undefined) {
// console.log("NODE", node);
if (node.image) {
var img = document.createElement("img");
defer = true;
img.addEventListener("load", function () {
done(null, tile);
})
img.src = node.image;
tile.appendChild(img);
img.classList.add("imagetile");
}
if (node.text) {
//console.log("text", node.text);
var textdiv = document.createElement("div");
textdiv.innerHTML = node.text;
tile.appendChild(textdiv);
textdiv.classList.add("text");
}
// if (node.url) {
// console.log("NODE HAS URL!", node.url);
// var urldiv = document.createElement("div"),
// urllink = document.createElement("a"),
// m = node.url.search(/\/([^\/]+)$/);
// urllink.innerHTML = (m != null) ? m[1] : "LINK";
// urldiv.appendChild(urllink);
// urldiv.classList.add("url");
// tile.appendChild(urldiv);
// }
if (node.background) {
tile.style.color = node.background;
}
if (node.class) {
tile.classList.add(node.class);
}
tile.classList.add("z"+coords.z);
} else {
tile.innerHTML = [coords.x, coords.y, coords.z].join(', ');
tile.classList.add("coords");
}
// tile.style.outline = '1px solid red';
if (!defer) {
window.setTimeout(function () {
done(null, tile);
}, 250);
}
return tile;
}
});
L.gridLayer.dynamicTiles = function(opts) {
return new L.GridLayer.DynamicTiles(opts);
};
}).call(this);
(function () {
function getjson (url, callback) {
var request = new XMLHttpRequest();
request.open('GET', url, true);
request.onload = function() {
if (request.readyState == XMLHttpRequest.DONE && request.status >= 200 && request.status < 400) {
callback(null, JSON.parse(request.responseText));
} else {
callback("server error");
}
};
request.onerror = function() {
callback("connection error");
};
request.send();
}
var map = L.map('map', {
editable: true,
maxZoom: 100,
minZoom: 0,
zoom: 0,
crs: L.CRS.Simple,
center: new L.LatLng(0,0),
});
getjson("index.json", function (err, data) {
var nodes = (tiler.layoutxyz(tiler.expandzoom(data)));
map.addLayer( L.gridLayer.dynamicTiles({
minZoom: 0,
nodes: nodes
}) );
})
var yx = L.latLng,
xy = function(x, y) {
if (L.Util.isArray(x)) { // When doing xy([x, y]);
return yx(x[1], x[0]);
}
return yx(y, x); // When doing xy(x, y);
};
function absolutize (coords, scale, x, y) {
// turns inkscape x,y relative path coords into absolute
// nb flips the direction of the y axis as well
if (scale == null) { scale = 1.0 }
if (x == null) { x = 0.0 }
if (y == null) { y = 0.0 }
var c,
out = [];
for (var i=0, l=coords.length; i<l; i++) {
c = coords[i];
x += c[0]*scale; y -= c[1]*scale;
out.push([x, y]);
}
return out;
}
var xpath = [[0,0], [-7.89063,-10.6836], [7.40235,0], [4.47265,6.48438], [4.53125,-6.48438], [7.40235,0], [-7.89063,10.64453], [8.28125,11.23047], [-7.40234,0], [-4.92188,-6.91406], [-4.86328,6.91406], [-7.40234,0], [8.28125,-11.1914]];
xpath = absolutize(xpath, 10.3, 87, -128.5);
// flip it
xpath = xpath.map(function (x) { return [x[1], x[0]] });
// x = x.map(function (x) { return yx(x) });
var polyline = L.polyline(xpath, {color: '#ed4e47'}).addTo(map);
map.setView(xy(0.5 * 256, -0.5 * 256), 0);
})();
</script>
</body>
</html>
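For reference, the tiling logic embedded in the page above can be sketched in Python: `cell_layout` places up to four children in the quadrants of a tile, and `layoutxyz` flattens the node tree into the `"x,y,z"`-keyed dictionary that the `DynamicTiles` layer reads from its `nodes` option. A minimal, self-contained sketch (not the production code, which lives in `leaflet.py`):

```python
# Sketch of cell_layout / layoutxyz: children occupy the four quadrants
# of the parent tile, and every node is keyed by its "x,y,z" coordinate.
def cell_layout(items):
    # quadrant order: top-left, top-right, bottom-left, bottom-right
    return [{"y": 0, "x": 0, "item": items[0]},
            {"y": 0, "x": 1, "item": items[1]},
            {"y": 1, "x": 0, "item": items[2]},
            {"y": 1, "x": 1, "item": items[3]}]

def layoutxyz(n, x=0, y=0, z=0, outnode=None):
    if outnode is None:
        outnode = {}
    outnode["{0},{1},{2}".format(x, y, z)] = n
    for g in cell_layout(n.get("children", [None] * 4)):
        if g["item"]:
            layoutxyz(g["item"], x * 2 + g["x"], y * 2 + g["y"], z + 1, outnode)
    return outnode

root = {"text": "root", "children": [{"text": "a"}, {"text": "b"}, None, None]}
nodes = layoutxyz(root)
print(sorted(nodes))  # ['0,0,0', '0,0,1', '1,0,1']
```

The root lands at `0,0,0`; its first two children land at zoom level 1 in the top-left and top-right quadrants.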

@@ -0,0 +1,27 @@
all: index.json
archive.json:
python scripts/mediawiki.py gallery --name archive --recursive \
https://pzwiki.wdka.nl/mediadesign/Category:2016 \
https://pzwiki.wdka.nl/mediadesign/Category:2015 \
https://pzwiki.wdka.nl/mediadesign/Category:2014 \
https://pzwiki.wdka.nl/mediadesign/Category:2013 \
https://pzwiki.wdka.nl/mediadesign/Category:2012 \
https://pzwiki.wdka.nl/mediadesign/Category:2011 \
https://pzwiki.wdka.nl/mediadesign/Category:2010 \
https://pzwiki.wdka.nl/mediadesign/Category:2009 \
https://pzwiki.wdka.nl/mediadesign/Category:2008 \
https://pzwiki.wdka.nl/mediadesign/Category:2007 \
https://pzwiki.wdka.nl/mediadesign/Category:2006 \
https://pzwiki.wdka.nl/mediadesign/Category:2005 \
https://pzwiki.wdka.nl/mediadesign/Category:2004 > archive.json
drop.node.json: drop.json
cat drop.json | python scripts/leaflet.py gallery --recursive --direction 2 > drop.node.json
about.json:
python scripts/texthierarchy.py < about.txt > about.json
index.json: archive.json about.json drop.node.json
python scripts/includenodes.py xpub.top.json > index.json

@@ -0,0 +1,166 @@
#!/usr/bin/python
from __future__ import print_function
from html5lib import parse
import os, sys
from argparse import ArgumentParser
from xml.etree import ElementTree as ET
def etree_indent(elem, level=0):
i = "\n" + level*" "
if len(elem):
if not elem.text or not elem.text.strip():
elem.text = i + " "
if not elem.tail or not elem.tail.strip():
elem.tail = i
for elem in elem:
etree_indent(elem, level+1)
if not elem.tail or not elem.tail.strip():
elem.tail = i
else:
if level and (not elem.tail or not elem.tail.strip()):
elem.tail = i
def get_link_type (url):
lurl = url.lower()
if lurl.endswith(".html") or lurl.endswith(".htm"):
return "text/html"
elif lurl.endswith(".txt"):
return "text/plain"
elif lurl.endswith(".rss"):
return "application/rss+xml"
elif lurl.endswith(".atom"):
return "application/atom+xml"
elif lurl.endswith(".json"):
return "application/json"
elif lurl.endswith(".js") or lurl.endswith(".jsonp"):
return "text/javascript"
def pluralize (x):
if type(x) == list or type(x) == tuple:
return x
else:
return (x,)
def html5tidy (doc, charset="utf-8", title=None, scripts=None, links=None, indent=False):
if scripts:
script_srcs = [x.attrib.get("src") for x in doc.findall(".//script")]
for src in pluralize(scripts):
if src not in script_srcs:
script = ET.SubElement(doc.find(".//head"), "script", src=src)
script_srcs.append(src)
if links:
existinglinks = {}
for elt in doc.findall(".//link"):
href = elt.attrib.get("href")
if href:
existinglinks[href] = elt
for link in links:
linktype = link.get("type") or get_link_type(link["href"])
if link["href"] in existinglinks:
elt = existinglinks[link["href"]]
elt.attrib["rel"] = link["rel"]
else:
elt = ET.SubElement(doc.find(".//head"), "link", href=link["href"], rel=link["rel"])
if linktype:
elt.attrib["type"] = linktype
if "title" in link:
elt.attrib["title"] = link["title"]
if charset:
meta_charsets = [x.attrib.get("charset") for x in doc.findall(".//meta") if x.attrib.get("charset") != None]
if not meta_charsets:
meta = ET.SubElement(doc.find(".//head"), "meta", charset=charset)  # use the function argument, not the CLI args
if title != None:
titleelt = doc.find(".//title")
if titleelt is None:  # an element with no children is falsy, so test for None explicitly
titleelt = ET.SubElement(doc.find(".//head"), "title")
titleelt.text = title
if indent:
etree_indent(doc)
return doc
if __name__ == "__main__":
p = ArgumentParser("")
p.add_argument("input", nargs="?", default=None)
p.add_argument("--indent", default=False, action="store_true")
p.add_argument("--mogrify", default=False, action="store_true", help="modify file in place")
p.add_argument("--method", default="html", help="method, default: html, values: html, xml, text")
p.add_argument("--output", default=None, help="")
p.add_argument("--title", default=None, help="ensure/add title tag in head")
p.add_argument("--charset", default="utf-8", help="ensure/add meta tag with charset")
p.add_argument("--script", action="append", default=[], help="ensure/add script tag")
# <link>s, see https://www.w3.org/TR/html5/links.html#links
p.add_argument("--stylesheet", action="append", default=[], help="ensure/add style link")
p.add_argument("--alternate", action="append", default=[], nargs="+", help="ensure/add alternate links (optionally followed by a title and type)")
p.add_argument("--next", action="append", default=[], nargs="+", help="ensure/add alternate link")
p.add_argument("--prev", action="append", default=[], nargs="+", help="ensure/add alternate link")
p.add_argument("--search", action="append", default=[], nargs="+", help="ensure/add search link")
p.add_argument("--rss", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/rss+xml")
p.add_argument("--atom", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/atom+xml")
args = p.parse_args()
links = []
def add_links (links, items, rel, _type=None):
for href in items:
d = {}
d["rel"] = rel
if _type:
d["type"] = _type
if type(href) == list:
if len(href) == 1:
d["href"] = href[0]
elif len(href) == 2:
d["href"] = href[0]
d["title"] = href[1]
elif len(href) == 3:
d["href"] = href[0]
d["title"] = href[1]
d["type"] = href[2]
else:
continue
else:
d["href"] = href
links.append(d)
for rel in ("stylesheet", "alternate", "next", "prev", "search"):
add_links(links, getattr(args, rel), rel)
for item in args.rss:
add_links(links, item, rel="alternate", _type="application/rss+xml")
for item in args.atom:
add_links(links, item, rel="alternate", _type="application/atom+xml")
# INPUT
if args.input:
fin = open(args.input)
else:
fin = sys.stdin
doc = parse(fin, namespaceHTMLElements=False)
if fin != sys.stdin:
fin.close()
html5tidy(doc, scripts=args.script, links=links, title=args.title, indent=args.indent)
# OUTPUT
tmppath = None
if args.output:
fout = open(args.output, "w")
elif args.mogrify:
tmppath = args.input+".tmp"
fout = open(tmppath, "w")
else:
fout = sys.stdout
print (ET.tostring(doc, method=args.method), file=fout)
if fout != sys.stdout:
fout.close()
if tmppath:
os.rename(args.input, args.input+"~")
os.rename(tmppath, args.input)
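The `etree_indent` pass above can be exercised on its own with only the standard library (html5lib is needed only for parsing); a minimal self-contained sketch, repeating the function so it runs standalone:

```python
# etree_indent rewrites the .text / .tail whitespace of every element
# so that serialization comes out indented.
from xml.etree import ElementTree as ET

def etree_indent(elem, level=0):
    i = "\n" + level * " "
    if len(elem):
        if not elem.text or not elem.text.strip():
            elem.text = i + " "
        if not elem.tail or not elem.tail.strip():
            elem.tail = i
        for elem in elem:
            etree_indent(elem, level + 1)
        if not elem.tail or not elem.tail.strip():
            elem.tail = i
    else:
        if level and (not elem.tail or not elem.tail.strip()):
            elem.tail = i

doc = ET.fromstring("<html><head><title>t</title></head><body/></html>")
etree_indent(doc)
print(ET.tostring(doc).decode())
```

Each nesting level adds one space of indentation, matching the script's output when `--indent` is passed.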

@@ -0,0 +1,63 @@
#!/usr/bin/env python
from PIL import Image
import re
def fitbox (boxw, boxh, w, h):
rw = boxw
rh = int(rw * (float(h) / w))
if (rh >= boxh):
rh = boxh
rw = int(rh * (float(w) / h))
return rw, rh
def tile_image (im, maxz=0, tilew=256, tileh=256, base=".", template="z{0[z]}y{0[y]}x{0[x]}.jpg", bgcolor=(0,0,0)):
z = 0
boxw, boxh = tilew, tileh
alpha = bgcolor != None # not template.endswith("jpg")
while True:
rw, rh = fitbox(boxw, boxh, im.size[0], im.size[1])
rim = im.resize((rw, rh), Image.ANTIALIAS)
if bgcolor:
tim = Image.new("RGB", (boxw, boxh), bgcolor)
tim.paste(rim, (0, 0))
else:
tim = Image.new("RGBA", (boxw, boxh))
tim.paste(rim, (0, 0))
rows, cols = 2**z, 2**z
for r in range(rows):
for c in range(cols):
ix = c*tilew
iy = r*tileh
cim = tim.crop((ix, iy, ix+tilew, iy+tileh))
op = base + template.format({'z':z, 'x':c, 'y':r})
# if not alpha:
# cim = cim.convert("RGB")
cim.save(op)
z += 1
if z>maxz:
break
boxw *= 2
boxh *= 2
if __name__ == "__main__":
from argparse import ArgumentParser
p = ArgumentParser("tile an image")
p.add_argument("--tilewidth", type=int, default=256, help="default: 256")
p.add_argument("--tileheight", type=int, default=256, help="default: 256")
p.add_argument("input")
p.add_argument("--output", default="./tile", help="output path, default: ./tile")
p.add_argument("--tilename", default="Z{z}Y{y}X{x}.jpg", help="template for tiles, default: Z{z}Y{y}X{x}.jpg")
p.add_argument("--background", default="0,0,0", help="background color, default: 0,0,0")
p.add_argument("--zoom", type=int, default=0, help="default 0")
args = p.parse_args()
im = Image.open(args.input)
tilename = re.sub(r"\{(.+?)\}", r"{0[\1]}", args.tilename)
background = tuple([int(x) for x in args.background.split(",")])
tile_image (im, args.zoom, args.tilewidth, args.tileheight, args.output, tilename, background)
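`fitbox` does a simple aspect-preserving fit of `(w, h)` into `(boxw, boxh)`; a standalone check of the function above:

```python
# fitbox scales (w, h) to fit inside (boxw, boxh), preserving aspect ratio:
# width-fit first, then re-fit by height if the result is too tall.
def fitbox(boxw, boxh, w, h):
    rw = boxw
    rh = int(rw * (float(h) / w))
    if rh >= boxh:
        rh = boxh
        rw = int(rh * (float(w) / h))
    return rw, rh

print(fitbox(256, 256, 1024, 512))  # wide image -> (256, 128)
print(fitbox(256, 256, 512, 1024))  # tall image -> (128, 256)
```

The remainder of the tile, when the image does not fill the box, is padded with `bgcolor` by `tile_image`.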

@@ -0,0 +1,28 @@
from __future__ import print_function
from argparse import ArgumentParser
ap = ArgumentParser("")
ap.add_argument("input")
args = ap.parse_args()
import json
with open(args.input) as f:
node = json.load(f)
def expand (node):
if node == None:
return node
retnode = node
if "@include" in node:
with open(node['@include']) as f:
retnode = json.load(f)
if "text" in node:
retnode['text'] = node['text']
if "children" in retnode:
retnode['children'] = [expand(c) for c in retnode['children']]
return retnode
print (json.dumps(expand(node), indent=2))
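The `@include` expansion can be demonstrated end to end with a temporary file; a minimal sketch of the script's `expand` step (the file name here is generated for the demo, not one of the repo's actual JSON files):

```python
# A node carrying "@include" is replaced by the parsed contents of that
# file, but its own "text" (if any) overrides the included one.
import json, os, tempfile

def expand(node):
    if node is None:
        return node
    retnode = node
    if "@include" in node:
        with open(node["@include"]) as f:
            retnode = json.load(f)
        if "text" in node:
            retnode["text"] = node["text"]
    if "children" in retnode:
        retnode["children"] = [expand(c) for c in retnode["children"]]
    return retnode

with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"text": "included", "children": []}, f)
root = {"children": [{"@include": f.name, "text": "override"}]}
out = expand(root)
os.unlink(f.name)
print(out["children"][0]["text"])  # override
```

This is how the Makefile's `includenodes.py xpub.top.json` step stitches `archive.json`, `about.json`, and `drop.node.json` into a single `index.json`.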

@@ -0,0 +1,711 @@
#!/usr/bin/env python
from __future__ import print_function, division
from argparse import ArgumentParser
from imagetile2 import tile_image
from PIL import Image
import os, json, sys, re, datetime, urlparse
from math import ceil, log
"""
Maybe a better name for this script is tiling or tiler as it's not particularly leaflet specific.
"""
def tiles_path_for (n):
return n + ".tiles"
def autolink (text):
def sub (m):
return u'<a href="{0}">LINK</a>'.format(m.group(0))
return re.sub(r"https?://[\S]+", sub, text, flags=re.I)  # the 4th positional arg of re.sub is count, not flags
def parse8601 (t, fmt=None):
""" simple 8601 parser that doesn't care about more than YMDHMS"""
# 2016-11-16T14:13:40.379857
m = re.search(r"(?P<year>\d\d\d\d)-(?P<month>\d\d)-(?P<day>\d\d)T(?P<hour>\d\d):(?P<minute>\d\d):(?P<second>\d\d)", t)
if m:
d = m.groupdict()
ret = datetime.datetime(int(d['year']), int(d['month']), int(d['day']), int(d['hour']), int(d['minute']), int(d['second']))
if fmt:
return ret.strftime(fmt)
else:
return ret
class tiles_wrapper (object):
""" Image wrapper abstraction... include URL to original + caption
"""
def __init__(self, path, url=None, text=None, tilename="z{0[z]}y{0[y]}x{0[x]}.png"):
self.path = path
# self.item = item
self.url = url
self.text = text
self.tilename = tilename
def get_tile_path (self, z, y, x):
return os.path.join(self.path, self.tilename.format({'z':z,'y':y,'x':x}))
def zoom (self):
""" return serialized version of self """
node = {}
node['zoomable'] = True
if self.text:
node['text'] = self.text
else:
# autotext is a link to the url showing the basename
_, basename = os.path.split(self.url)
node['text'] = u"<a href=\"{0}\">{1}</a>".format(self.url, basename)
node['url'] = self.url
node['image'] = self.get_tile_path(0, 0, 0)
return node
def zoom_recursive (self, caption, x=0, y=0, z=0, maxzoom=3):
""" old style zoom in place -- ie render self to child nodes """
node = {}
node['text'] = self.text
node['image'] = self.get_tile_path(z, y, x)
if z < maxzoom:
kids = []
for r in range(2):
for c in range(2):
kids.append(self.zoom_recursive(caption, (x*2)+c, (y*2)+r, z+1, maxzoom))
node['children'] = kids
return node
def cell_layout(items, w=2):
i = 0
for r in range(w):
for c in range(w):
if i<len(items):
yield items[i], c, r
i+=1
def fourup (imgs, w, h):
print ("fourup", imgs, w, h, file=sys.stderr)
oi = Image.new("RGBA", (w, h))
cw = w//2
ch = h//2
i = 0
for impath, c, r in cell_layout(imgs):
if impath:
im = Image.open(impath)
im.thumbnail((cw, ch))
oi.paste(im, (c*cw, r*ch))
return oi
def split4(items):
""" returns 4 lists where len(l) is a power of 4 """
l = len(items)
p = int(ceil(log(l, 4)))
# print ("{0} items {1} {2} {3}".format(l, p, 2**p, 4**p))
c = int((4**p)/ 4)
# c = int(ceil(len(items) / 4))
def el (x, c): # ensurelength
while len(x) < c:
x.append(None)
return x
ret = [items[0:c],items[c:c*2],items[c*2:c*3],items[c*3:]]
return tuple([el(x, c) for x in ret])
def gridrender (items, basename, tilewidth=256, tileheight=256, z=0, y=0, x=0):
""" items are now nodes proper """
""" Takes a list of nodes and returns a new node where items are arranged in a cascade of nodes such that
all items appear at the same (z) level -- side by side
Uses fourup to (recursively) produce a composite image of the underlying tiles.
"""
print ("gridrender {0} items".format(len(items)), file=sys.stderr)
if len(items) == 1:
x = items[0]
if x == None:
return None
return x # x.zoom()
else:
node = {}
node['text'] = ''
kids = []
for group, x2, y2 in cell_layout(split4(items)):
kids.append(gridrender(group, basename, tilewidth, tileheight, z+1, (y*2)+y2, (x*2)+x2))
node['children'] = [j for j in kids if j != None]
newim = fourup([j.get("image") for j in node['children'] if j != None and j.get("image")], tilewidth, tileheight)
newimpath = "{0}.z{1}y{2}x{3}.png".format(basename, z, y, x)
newim.save(newimpath)
node['image'] = newimpath
print ("Created 4up image {0}".format(newimpath), file=sys.stderr)
return node
def recursiverender (items, basename, tilewidth=256, tileheight=256, direction=3, z=0):
node = {}
node['text'] = ''
# if len(items) >=1 and 'date' in items[0].item:
# node['text'] = items[0].item['date']
# else:
# node['text'] = ''
# node['image'] = ''
node['children'] = cc = [None, None, None, None]
ai = 0
for x in items[:3]:
# cap = os.path.splitext(os.path.basename(x.path))[0]
# cc.append(x) # x.zoom()
if (ai == direction):
ai += 1
cc[ai] = x
ai += 1
rest = items[3:]
if rest:
# recurse
# cc.append(recursiverender(rest, basename, tilewidth, tileheight, z+1))
cc[direction] = recursiverender(rest, basename, tilewidth, tileheight, direction, z+1)
newim = fourup([x.get("image") for x in node['children'] if x != None and x.get("image")], tilewidth, tileheight)
# simplified name works just because there's only one generated tile per level
newimpath = u"{0}.z{1}.png".format(basename, z)
newim.save(newimpath)
node['image'] = newimpath
return node
def layoutxyz (n, x=0, y=0, z=0, outnode=None):
if outnode is None: # avoid Python's shared mutable default argument
outnode = {}
# print ("layout", n, x, y, z, file=sys.stderr)
outnode["{0},{1},{2}".format(x,y,z)] = {
"text": n['text'],
"image": n['image']
}
if 'children' in n:
for child, cx, cy in cell_layout(n['children']):
layoutxyz(child, (x*2)+cx, (y*2)+cy, z+1, outnode)
return outnode
def html (node, title):
page = u"""<!DOCTYPE html>
<html>
<head>
<title>""" + title + u"""</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<script src="/lib/leaflet-1.0.1/leaflet.js"></script>
<link href="/lib/leaflet-1.0.1/leaflet.css" rel="stylesheet" type="text/css">
<link href="map.css" rel="stylesheet" type="text/css">
</head>
<body>
<div id="frame" style="position: absolute; left: 0px; top: 0px; right: 0px; bottom: 0px">
<div id="map" style="width: 100%; height: 100%; background: black"></div>
<div id="text" style="position: absolute; left: 50px; top: 10px; width: auto; color: white">
</div>
</div>
<script>
(function() {
// warning CHANGES TO THIS CODE NEED TO BE ROLLED BACK INTO leaflet.py
var cell_layout, expandzoom, fourup, layoutxyz, render, split4, tiler, tiles_wrapper, zoom;
window.tiler = tiler = {};
tiler.tiles_wrapper = tiles_wrapper = function(path, ext) {
if (ext == null) { ext = "jpg"; }
var ret = {};
ret.get_tile_path = function(z, y, x) {
return path + ("/z"+z+"y"+y+"x"+x+"."+ext);
};
return ret;
};
tiler.zoom = zoom = function(tiles, caption, url, x, y, z, maxzoom) {
var c, i, k, kids, len, len1, node, r, ref, ref1;
if (x == null) {
x = 0;
}
if (y == null) {
y = 0;
}
if (z == null) {
z = 0;
}
if (maxzoom == null) {
maxzoom = 3;
}
node = {};
if (caption && x === 0 && y === 0) {
node['text'] = caption;
}
var lastc = Math.pow(2, z) - 1;
if (url && x === 0 && y === lastc) {
node['url'] = url
}
node['image'] = tiles.get_tile_path(z, y, x);
if (z < maxzoom) {
kids = [];
ref = [0, 1];
for (i = 0, len = ref.length; i < len; i++) {
r = ref[i];
ref1 = [0, 1];
for (k = 0, len1 = ref1.length; k < len1; k++) {
c = ref1[k];
kids.push(zoom(tiles, caption, url, (x * 2) + c, (y * 2) + r, z + 1, maxzoom));
}
}
node['children'] = kids;
}
return node;
};
split4 = function(items) {
var c, el, i, l, len, p, ref, results, x;
l = items.length;
p = Math.ceil(Math.log(l) / Math.log(4));
c = Math.max(1, Math.pow(4, p) / 4);
el = function(x, c) {
while (x.length < c) {
x.push(null);
}
return x;
};
ref = [items.slice(0, c), items.slice(c, c * 2), items.slice(c * 2, c * 3), items.slice(c * 3)];
results = [];
for (i = 0, len = ref.length; i < len; i++) {
x = ref[i];
results.push(el(x, c));
}
return results;
};
cell_layout = function(items) {
return [
{
y: 0,
x: 0,
item: items[0]
}, {
y: 0,
x: 1,
item: items[1]
}, {
y: 1,
x: 0,
item: items[2]
}, {
y: 1,
x: 1,
item: items[3]
}
];
};
tiler.render = render = function(items, tilewidth, tileheight, z, y, x) {
var g, i, j, kids, len, node, ref;
if (tilewidth == null) {
tilewidth = 256;
}
if (tileheight == null) {
tileheight = 256;
}
if (z == null) {
z = 0;
}
if (y == null) {
y = 0;
}
if (x == null) {
x = 0;
}
if (items.length === 1) {
x = items[0];
if (x === null) {
return null;
}
return zoom(x, '');
} else {
node = {};
node['text'] = '';
kids = [];
ref = cell_layout(split4(items));
for (i = 0, len = ref.length; i < len; i++) {
g = ref[i];
kids.push(render(g.item, tilewidth, tileheight, z + 1, (y * 2) + g.y, (x * 2) + g.x));
}
node.children = (function() {
var k, len1, results;
results = [];
for (k = 0, len1 = kids.length; k < len1; k++) {
j = kids[k];
if (j !== null) {
results.push(j);
}
}
return results;
})();
node.image = fourup((function() {
var k, len1, ref1, results;
ref1 = node.children;
results = [];
for (k = 0, len1 = ref1.length; k < len1; k++) {
j = ref1[k];
if (j !== null) {
results.push(j.image);
}
}
return results;
})(), tilewidth, tileheight);
return node;
}
};
tiler.layoutxyz = layoutxyz = function(n, x, y, z, outnode) {
var g, i, len, ref;
if (x == null) {
x = 0;
}
if (y == null) {
y = 0;
}
if (z == null) {
z = 0;
}
if (outnode == null) {
outnode = {};
}
outnode[x + "," + y + "," + z] = n;
if (n.children) {
ref = cell_layout(n.children);
for (i = 0, len = ref.length; i < len; i++) {
g = ref[i];
if (g.item) {
layoutxyz(g.item, (x * 2) + g.x, (y * 2) + g.y, z + 1, outnode);
}
}
}
return outnode;
};
tiler.fourup = fourup = function(images, tilewidth, tileheight) {
if (tilewidth == null) {
tilewidth = 256;
}
if (tileheight == null) {
tileheight = 256;
}
return function(done) {
var i, img, imgelts, len, loadcount, results, src, x;
loadcount = 0;
images = (function() {
var i, len, results;
results = [];
for (i = 0, len = images.length; i < len; i++) {
x = images[i];
if (x !== null) {
results.push(x);
}
}
return results;
})();
imgelts = [];
results = [];
for (i = 0, len = images.length; i < len; i++) {
src = images[i];
img = new Image;
imgelts.push(img);
img.addEventListener("load", function() {
var canvas, ctx, g, hh, hw, k, len1, ref;
if (++loadcount >= images.length) {
canvas = document.createElement("canvas");
canvas.width = tilewidth;
canvas.height = tileheight;
ctx = canvas.getContext("2d");
hw = tilewidth / 2;
hh = tileheight / 2;
ref = cell_layout(imgelts);
for (k = 0, len1 = ref.length; k < len1; k++) {
g = ref[k];
if (g.item) {
ctx.drawImage(g.item, g.x * hw, g.y * hh, hw, hh);
}
}
return done(null, canvas.toDataURL());
}
}, false);
if (typeof src === "function") {
console.log("inside 4up, deferring");
results.push(src(function(err, data) {
console.log(" inside 4up, GOT DATA");
return img.src = data;
}));
} else {
results.push(img.src = src);
}
}
return results;
};
};
tiler.expandzoom = expandzoom = function(node) {
var c, ret, tilespath;
if (node.zoomable) {
tilespath = node.image.replace(/\/[^\/]+$/, "");
var ext = node.image.match(/\.([^\.]+)$/);
if (ext != null) { ext = ext[1]; }
ret = zoom(tiles_wrapper(tilespath, ext), node.text, node.url);
return ret;
}
if (node.children) {
node.children = (function() {
var i, len, ref, results;
ref = node.children;
results = [];
for (i = 0, len = ref.length; i < len; i++) {
c = ref[i];
if (c != null) {
results.push(expandzoom(c));
}
}
return results;
})();
}
return node;
};
/* DynamicTiles */
/*
A simple GridLayer extension that takes an external "nodes" object as an option.
Nodes are keyed "x,y,z" and are expected to be of the form:
{
text: "My text",
image: "imagepath.jpg"
}
*/
L.GridLayer.DynamicTiles = L.GridLayer.extend({
createTile: function (coords, done) { // done = (err, tile)
// console.log("createTile", coords, this.options, this.options.nodes);
var tile = document.createElement('div'),
node = this.options.nodes[coords.x+","+coords.y+","+coords.z],
defer = false;
tile.classList.add("tile");
if (node != undefined) {
// console.log("NODE", node);
if (node.image) {
var img = document.createElement("img");
defer = true;
img.addEventListener("load", function () {
done(null, tile);
})
img.src = node.image;
tile.appendChild(img);
img.classList.add("imagetile");
}
if (node.text) {
//console.log("text", node.text);
var textdiv = document.createElement("div");
textdiv.innerHTML = node.text;
tile.appendChild(textdiv);
textdiv.classList.add("text");
}
// if (node.url) {
// console.log("NODE HAS URL!", node.url);
// var urldiv = document.createElement("div"),
// urllink = document.createElement("a"),
// m = node.url.search(/\/([^\/]+)$/);
// urllink.innerHTML = (m != null) ? m[1] : "LINK";
// urldiv.appendChild(urllink);
// urldiv.classList.add("url");
// tile.appendChild(urldiv);
// }
if (node.background) {
tile.style.color = node.background;
}
if (node.class) {
tile.classList.add(node.class);
}
tile.classList.add("z"+coords.z);
} else {
tile.innerHTML = [coords.x, coords.y, coords.z].join(', ');
tile.classList.add("coords");
}
// tile.style.outline = '1px solid red';
if (!defer) {
window.setTimeout(function () {
done(null, tile);
}, 250);
}
return tile;
}
});
L.gridLayer.dynamicTiles = function(opts) {
return new L.GridLayer.DynamicTiles(opts);
};
}).call(this);
(function () {
function getjson (url, callback) {
var request = new XMLHttpRequest();
request.open('GET', url, true);
request.onload = function() {
if (request.readyState == XMLHttpRequest.DONE && request.status >= 200 && request.status < 400) {
callback(null, JSON.parse(request.responseText));
} else {
callback("server error");
}
};
request.onerror = function() {
callback("connection error");
};
request.send();
}
var map = L.map('map', {
editable: true,
maxZoom: 100,
minZoom: 0,
zoom: 0,
crs: L.CRS.Simple,
center: new L.LatLng(0,0),
});
var data = """ + json.dumps(node) + """;
var nodes = (tiler.layoutxyz(tiler.expandzoom(data)));
map.addLayer( L.gridLayer.dynamicTiles({
minZoom: 0,
nodes: nodes
}) );
var yx = L.latLng,
xy = function(x, y) {
if (L.Util.isArray(x)) { // When doing xy([x, y]);
return yx(x[1], x[0]);
}
return yx(y, x); // When doing xy(x, y);
};
// map.setView(xy(0.5 * 256, -0.5 * 256), 0);
})();
</script>
</body>
</html>
"""
return page
def make_gallery(args):
"""
to do -- separate the actual tiling process...
make tiling a separate pass on the actual node JSON
NB: this command accepts two different kinds of input.
1. One or more images as (argv) arguments -or-
2. A JSON stream (one object per line) on stdin.
"""
bgcolor = None # (0, 0, 0)
items = []
if args.input:
for x in args.input:
i = {'url': x}
items.append(i)
else:
for line in sys.stdin:
line = line.rstrip()
if line and not line.startswith("#"):
item = json.loads(line)
items.append(item)
# Ensure / Generate tiles per image
items.sort(key=lambda x: x['url'])
tiles = []
for item in items:
n = item['url']
# print (n, file=sys.stderr)
path = os.path.join(args.tilespath, n)
# TODO date format...
caption = ''
if 'text' in item or 'date' in item:
caption += u'<p class="caption">'
if 'text' in item:
caption += u'<span class="text">{0}</span>'.format(autolink(item['text']))
if 'date' in item:
dt = parse8601(item['date'], "%d %b %Y")
caption += u'<span class="date">{0}</span>'.format(dt)
if 'url' in item:
ext = os.path.splitext(urlparse.urlparse(item['url']).path)[1]
if ext:
ext = ext[1:].upper()
caption += u'<a class="url" href="{0}">{1}</a>'.format(item['url'], ext)
if 'text' in item or 'date' in item:
caption += u'</p>'
t = tiles_wrapper(path, item['url'], text=caption)
tiles.append(t)
tile0 = t.get_tile_path(0, 0, 0) # os.path.join(path, args.tilename.format({'x': 0, 'y': 0, 'z': 0}))
if not os.path.exists(tile0) or args.force:
print ("Tiling {0}".format(n), file=sys.stderr)
try:
im = Image.open(n)
try:
os.makedirs(path)
except OSError:
pass
tile_image(im, args.zoom, args.tilewidth, args.tileheight, path+"/", args.tilename, bgcolor)
# tiles.append(t)
except IOError as e:
print ("Missing {0}, skipping".format(n), file=sys.stderr)
tiles = tiles[:-1]
# DO THE LAYOUT, generating intermediate tiles (zoom outs)
if args.reverse:
tiles.reverse()
tiles = [t.zoom() for t in tiles]
basename = os.path.join(args.tilespath, args.name)
if args.recursive:
root_node = recursiverender(tiles, basename, args.tilewidth, args.tileheight, args.direction)
else:
root_node = gridrender(tiles, basename, args.tilewidth, args.tileheight)
# OUTPUT ROOT NODE
if args.html:
print (html(root_node, args.name))
else:
print (json.dumps(root_node, indent=args.indent))
if __name__ == "__main__":
ap = ArgumentParser("")
ap.add_argument("--basepath", default=".")
ap.add_argument("--baseuri", default="")
ap.add_argument("--tilespath", default="tiles")
ap.add_argument("--tilewidth", type=int, default=256)
ap.add_argument("--tileheight", type=int, default=256)
ap.add_argument("--zoom", type=int, default=3)
ap.add_argument("--tilename", default="z{0[z]}y{0[y]}x{0[x]}.png")
ap.add_argument("--reverse", default=False, action="store_true")
ap.add_argument("--indent", default=2, type=int)
ap.add_argument("--recursive", default=False, action="store_true")
ap.add_argument("--force", default=False, action="store_true")
subparsers = ap.add_subparsers(help='sub-command help')
ap_gallery = subparsers.add_parser('gallery', help='Create a grid gallery of images')
ap_gallery.add_argument("input", nargs="*")
ap_gallery.add_argument("--html", default=False, action="store_true")
ap_gallery.add_argument("--recursive", default=False, action="store_true")
ap_gallery.add_argument("--direction", type=int, default=3, help="cell to recursively expand into, 0-3, default: 3 (bottom-right)")
ap_gallery.add_argument("--name", default="gallery")
ap_gallery.set_defaults(func=make_gallery)
args = ap.parse_args()
args.func(args)
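The `--tilename` default above is a `str.format` pattern indexed with a dict of tile coordinates; a quick sketch of how it expands (the coordinate values here are arbitrary examples):

```python
# The default --tilename pattern indexes into a dict of tile coordinates:
# {0[z]} looks up key "z" on the first format argument, and so on.
tilename = "z{0[z]}y{0[y]}x{0[x]}.png"
print(tilename.format({'x': 2, 'y': 1, 'z': 3}))  # → z3y1x2.png
```

This is why `get_tile_path(0, 0, 0)` in `make_gallery` resolves to `z0y0x0.png` under the image's tile directory.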

@ -0,0 +1,318 @@
from __future__ import print_function
import os, sys, re, urllib, urlparse, html5lib, json
from PIL import Image
from math import log
from argparse import ArgumentParser
from urllib2 import urlopen
from xml.etree import ElementTree as ET
# from wiki_get_html import page_html
from mwclient import Site
from mwclient.page import Page
from leaflet import tiles_wrapper, recursiverender, gridrender, html
from imagetile2 import tile_image
def wiki_url_to_title (url):
return urllib.unquote(url.split("/")[-1])
def parse_gallery(t):
""" returns [(imagepageurl, caption, articleurl), ...] """
galleryitems = t.findall(".//li[@class='gallerybox']")
items = []
for i in galleryitems:
image_link = i.find(".//a[@class='image']")
src = None
captiontext = None
article = None
if image_link != None:
src = image_link.attrib.get("href")
# src = src.split("/")[-1]
caption = i.find(".//*[@class='gallerytext']")
if caption is not None:
captiontext = ET.tostring(caption, method="html")
articlelink = caption.find(".//a")
if articlelink != None:
article = articlelink.attrib.get("href")
# f = wiki.Pages[imgname]
# items.append((f.imageinfo['url'], captiontext))
items.append((src, captiontext, article))
return items
def mwfilepage_to_url (wiki, url):
filename = urllib.unquote(url.split("/")[-1])
page = wiki.Pages[filename]
return page, page.imageinfo['url']
def url_to_path (url):
""" https://pzwiki.wdka.nl/mediadesign/File:I-could-have-written-that_these-are-the-words_mb_300dpi.png """
path = urllib.unquote(urlparse.urlparse(url).path)
return "/".join(path.split("/")[3:])
def wiki_absurl (wiki, url):
ret = ''
if type(wiki.host) == tuple:
ret = wiki.host[0]+"://"+wiki.host[1]
else:
ret = "http://"+wiki.host
return urlparse.urljoin(ret, url)
def wiki_title_to_url (wiki, title):
""" relies on wiki.site['base'] being set to the public facing URL of the Main page """
ret = ''
parts = urlparse.urlparse(wiki.site['base'])
base, main_page = os.path.split(parts.path)
ret = parts.scheme+"://"+parts.netloc+base
p = wiki.pages[title]
ret += "/" + p.normalize_title(p.name)
return ret
def ensure_wiki_image_tiles (wiki, imagepageurl, text='', basepath="tiles", force=False, bgcolor=None, tilewidth=256, tileheight=256, zoom=3):
print ("ensure_wiki_image_tiles", imagepageurl, file=sys.stderr)
page, imageurl = mwfilepage_to_url(wiki, imagepageurl)
path = os.path.join(basepath, url_to_path(imageurl))
print ("imageurl, path", imageurl, path, file=sys.stderr)
ret = tiles_wrapper(path, imagepageurl, text=text)
tp = ret.get_tile_path(0, 0, 0)
if os.path.exists(tp) and not force:
return ret
try:
os.makedirs(path)
except OSError:
pass
im = Image.open(urlopen(imageurl))
tile_image(im, zoom, tilewidth, tileheight, path+"/", ret.tilename, bgcolor)
return ret
def textcell (paras):
node = {}
node['text'] = paras[:1]
moretext = paras[1:]
if moretext:
node['children'] = [textcell([x]) for x in moretext]
return node
def name_to_path (name):
return name.replace("/", "_")
def render_article (wiki, ref, basepath="tiles", depth=0, maxdepth=3):
print ("render_article", ref, file=sys.stderr)
if type(ref) == Page:
page = ref
title = page.name
ref = wiki_title_to_url(wiki, page.name)
elif ref.startswith("http"):
title = wiki_url_to_title(ref)
page = wiki.pages[title]
else:
title = ref
page = wiki.pages[title]
ref = wiki_title_to_url(wiki, page.name)
# pagetext = page.text()
# print ("WIKI PARSE", title, file=sys.stderr)
parse = wiki.parse(page=title)
html = parse['text']['*']
# print ("GOT HTML ", html, file=sys.stderr)
tree = html5lib.parse(html, treebuilder="etree", namespaceHTMLElements=False)
body = tree.find("./body")
paras = []
images = []
imgsrcs = {}
for c in body:
if c.tag == "p":
# filter out paras like <p><br></p> by checking text-only render length
ptext = ET.tostring(c, encoding="utf-8", method="text").strip()
if len(ptext) > 0:
ptext = ET.tostring(c, encoding="utf-8", method="html").strip()
paras.append(ptext)
elif c.tag == "ul" and c.attrib.get("class") != None and "gallery" in c.attrib.get("class"):
# print ("GALLERY")
gallery = parse_gallery(c)
# Ensure image is downloaded ... at least the 00 image...
for src, caption, article in gallery:
src = wiki_absurl(wiki, src)
if src in imgsrcs:
continue
imgsrcs[src] = True
print ("GalleryImage", src, caption, article, file=sys.stderr)
# if article and depth < maxdepth:
# article = wiki_absurl(wiki, article)
# images.append(render_article(wiki, article, caption, basepath, depth+1, maxdepth))
# else:
images.append(ensure_wiki_image_tiles(wiki, src, caption, basepath).zoom())
for a in body.findall('.//a[@class="image"]'):
caption = a.attrib.get("title", '')
src = wiki_absurl(wiki, a.attrib.get("href"))
# OEI... skipping svg for the moment (can't go straight to PIL)
if src.endswith(".svg"):
continue
print (u"Image_link {0}:'{1}'".format(src, caption).encode("utf-8"), file=sys.stderr)
if src in imgsrcs:
continue
imgsrcs[src] = True
images.append(ensure_wiki_image_tiles(wiki, src, caption, basepath).zoom())
print ("{0} paras, {1} images".format(len(paras), len(images)), file=sys.stderr)
if title is None:
title = page.name
basename = "tiles/" + name_to_path(page.name)
# gallerynode = gridrender(images, basename)
# return gallerynode
cells = []
if len(paras) > 0:
cells.append(textcell(paras))
cells.extend(images)
ret = recursiverender(cells, basename)
ret['text'] = u"""<p class="caption"><span class="text">{0}</span><a class="url" href="{1}">WIKI</a></p>""".format(title, ref)
if images:
ret['image'] = images[0]['image']
return ret
# article = {}
# article['text'] = title
# article['children'] = children = []
# children.append(textcell(paras))
# for iz in images[:2]:
# if 'image' not in article and 'image' in iz:
# article['image'] = iz['image']
# children.append(iz)
# restimages = images[2:]
# if len(restimages) == 1:
# children.append(restimages[0])
# elif len(restimages) > 1:
# children.append(gridrender(restimages, basename))
# return article
def render_category (wiki, cat, output="tiles"):
print ("Render Category", cat, file=sys.stderr)
# if type(cat) == Page:
# page = ref
# title = page.name
# ref = wiki_title_to_url(wiki, page.name)
if cat.startswith("http"):
title = wiki_url_to_title(cat)
cat = wiki.pages[title]
else:
title = cat
cat = wiki.pages[cat]
# ref = wiki_title_to_url(wiki, cat.name)
print ("cat", cat, file=sys.stderr)
pages = []
for m in cat.members():
pages.append(m)
pages.sort(key=lambda x: x.name)
pagenodes = [render_article(wiki, x.name) for x in pages]
for page, node in zip(pages, pagenodes):
node['text'] = u"""<p class="caption"><span class="text">{0}</span><a class="url" href="{1}">WIKI</a></p>""".format(page.name, wiki_title_to_url(wiki, page.name))
ret = gridrender(pagenodes, output+"/"+cat.name.replace(":", "_"))
ret['text'] = u"""<p class="caption"><a class="url" href="{0}">{1}</a></p>""".format(wiki_title_to_url(wiki, cat.name), cat.name)
return ret
# for p in pages:
# print (p.name, wiki_title_to_url(wiki, p.name))
def make_category (args):
wiki = Site((args.wikiprotocol, args.wikihost), path=args.wikipath)
root_node = render_category(wiki, args.category)
if args.html:
print (html(root_node, ""))
else:
print (json.dumps(root_node, indent=2))
def make_article (args):
wiki = Site((args.wikiprotocol, args.wikihost), path=args.wikipath)
root_node = render_article(wiki, args.wikipage)
if args.html:
print (html(root_node, ""))
else:
print (json.dumps(root_node, indent=2))
def make_gallery(args):
wiki = Site((args.wikiprotocol, args.wikihost), path=args.wikipath)
# apiurl = args.wikiprotocol+"://"+args.wikihost+args.wikipath+"api.php"
if len(args.wikipage) == 1:
root_node = render_article(wiki, args.wikipage[0])
else:
children = []
for wikipage in args.wikipage:
print ("rendering", wikipage, file=sys.stderr)
if "Category:" in wikipage:
print ("rendering", wikipage, file=sys.stderr)
cnode = render_category(wiki, wikipage, args.output)
else:
cnode = render_article(wiki, wikipage)
children.append(cnode)
if args.recursive:
root_node = recursiverender(children, args.output+"/"+args.name, direction=args.direction)
else:
root_node = gridrender(children, args.output+"/"+args.name)
if args.html:
print (html(root_node, ""))
else:
print (json.dumps(root_node, indent=2))
def testwiki (args):
return Site((args.wikiprotocol, args.wikihost), path=args.wikipath)
if __name__ == "__main__":
ap = ArgumentParser("")
ap.add_argument("--wikiprotocol", default="https")
ap.add_argument("--wikihost", default="pzwiki.wdka.nl")
ap.add_argument("--wikipath", default="/mw-mediadesign/")
ap.add_argument("--wikishortpath", default="/mediadesign/")
ap.add_argument("--tilewidth", type=int, default=256)
ap.add_argument("--tileheight", type=int, default=256)
# ap.add_argument("--zoom", type=int, default=3)
ap.add_argument("--output", default="tiles")
# ap.add_argument("--title", default="TITLE")
subparsers = ap.add_subparsers(help='sub-command help')
ap_article = subparsers.add_parser('article', help='Render an article')
ap_article.add_argument("wikipage")
ap_article.add_argument("--html", default=False, action="store_true")
ap_article.set_defaults(func=make_article)
ap_gallery = subparsers.add_parser('gallery', help='Render a gallery of articles')
ap_gallery.add_argument("wikipage", nargs="+")
ap_gallery.add_argument("--html", default=False, action="store_true")
ap_gallery.add_argument("--recursive", default=False, action="store_true")
ap_gallery.add_argument("--direction", type=int, default=3, help="cell to recursively expand into, 0-3, default: 3 (bottom-right)")
ap_gallery.add_argument("--name", default="gallery")
ap_gallery.set_defaults(func=make_gallery)
ap_testwiki = subparsers.add_parser('testwiki', help='Test the connection to the wiki')
ap_testwiki.set_defaults(func=testwiki)
ap_category = subparsers.add_parser('category', help='Render a category')
ap_category.add_argument("category")
ap_category.add_argument("--html", default=False, action="store_true")
ap_category.set_defaults(func=make_category)
args = ap.parse_args()
ret = args.func(args)

@ -0,0 +1,32 @@
from __future__ import print_function
from html5lib import parse
import sys, json
from xml.etree import ElementTree as ET
def process (f):
stack = []
for line in f:
line = line.rstrip()
if line:
level = 0
while line.startswith("\t"):
line = line[1:]
level += 1
print (level, line, file=sys.stderr)
node = {
'text': line,
'level': level,
'children': []
}
while len(stack) > level:
stack.pop()
if len(stack):
stack[-1]['children'].append(node)
stack.append(node)
return stack[0]
if __name__ == "__main__":
n = process(sys.stdin)
import json
print (json.dumps(n, indent=2))
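The outline parser above turns tab-indented lines into a nested node tree. A self-contained sketch of the same logic, runnable under Python 3 (the input lines here are made-up examples):

```python
import json

def process_outline(lines):
    # Same logic as the script above: each non-empty line becomes a node,
    # leading tabs set its nesting level, and a stack tracks the current
    # chain of ancestors.
    stack = []
    for line in lines:
        line = line.rstrip()
        if not line:
            continue
        level = 0
        while line.startswith("\t"):
            line = line[1:]
            level += 1
        node = {'text': line, 'level': level, 'children': []}
        # Pop back to the parent of this level
        while len(stack) > level:
            stack.pop()
        if stack:
            stack[-1]['children'].append(node)
        stack.append(node)
    return stack[0]

tree = process_outline(["root", "\tchild A", "\t\tgrandchild", "\tchild B"])
print(json.dumps(tree, indent=2))
```

Note the script assumes a single level-0 root line; a second root-level line would replace the first in the stack.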

@ -0,0 +1,76 @@
@font-face {
font-family: "Libertinage x";
src: url("fonts/Libertinage-x.ttf");
}
@font-face {
font-family: "OSP-DIN";
src: url("fonts/OSP-DIN.ttf");
}
body {
margin: 5em;
font-family: "Libertinage x", serif;
font-size: 1.1em;
color: #2d2020;
background: #f2eee3;
}
#map {
background: #f2eee3 !important;
}
div.tile {
color: #2d2020;
position: absolute;
pointer-events: auto; /* this enables links */
}
div.tile img.imagetile {
position: absolute;
left: 0; top: 0;
z-index: 0;
}
div.tile div.text {
position: absolute;
left: 0; top: 0;
z-index: 1;
font-family: sans-serif;
font-size: 15px;
line-height: 18px;
/*text-shadow: 1px 1px 2px black;*/
padding-right: 10px;
padding-left: 0px;
margin-top: 0px;
}
div.tile div.text p {
margin: 0;
hyphens: auto;
}
div.tile div.text a {
margin: 0;
text-decoration: none;
color: #f2eee3;
background: #ed4e47;
}
div.tile div.text a:hover {}
div.coords {
pointer-events: none;
display: none;
}
.leaflet-overlay-pane {
z-index: 0 !important; /* hack to put the x underneath */
}
p.caption {}
p.caption span.text {
/*background: #444;*/
}
p.caption span.date {
padding-left: 8px;
/*background: #444;*/
/*color: #AAA;*/
}
p.caption a.url {
padding-left: 8px;
/*color: #FF0;*/
}
p.caption a.url:hover {
/*background: #FF0;*/
/*color: black;*/
}

@ -0,0 +1,14 @@
{
"text": "",
"children": [
{ "@include": "about.json" },
{
"text": "<p class=\"caption\"><span class=\"text\">Here you find a frequently updated stream of images and posts reflecting current events in the course.</span></p>",
"@include": "drop.node.json"
},
{
"text": "<p class=\"caption\"><span class=\"text\">In the archive you find a vast collection of final projects from over 10 years of Media Design (including Networked Media and Lens based).</span></p>",
"@include": "archive.json"
}
]
}
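The index above composes its children via `@include` keys. The resolver itself is not part of this commit, so the following is only a sketch of plausible semantics: each `{"@include": "file.json"}` entry loads the referenced file, with inline keys (such as `"text"`) overriding the included file's keys.

```python
import json
import os

def resolve_includes(node, basedir="."):
    # Assumed semantics: replace {"@include": "file.json"} by the file's
    # contents, letting inline keys on the same object take precedence.
    if isinstance(node, dict):
        if "@include" in node:
            path = os.path.join(basedir, node["@include"])
            with open(path) as f:
                included = json.load(f)
            merged = dict(included)
            merged.update({k: v for k, v in node.items() if k != "@include"})
            node = merged
        return {k: resolve_includes(v, basedir) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_includes(x, basedir) for x in node]
    return node
```

Under this reading, the second child above keeps its inline caption `text` while pulling the rest of its node (children, images) from `drop.node.json`.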