Git repository for the mini-site of Special Issue 5

<!DOCTYPE html>
<html>
<head>
<title>Special Issue 5: OuNuPo</title>
<script type="text/javascript" src="jquery-3.3.1.min.js"></script>
<script type="text/javascript" src="images-mwapi.js"></script>
<meta charset="utf-8">
<meta name="description" content="" />
<meta name="keywords" content="Piet Zwart Institute, Experimental Publishing, XPUB, OuNuPo, book, scanning, software, algorithmic, Manetta Berends, Cristina Cochior, Varia, WORM Pirate Bay, DIY, book scanning, feminist, research, constraint writing, literature, text, digitisation, processing, OuLiPo" />
<meta name="author" content="Experimental Publishing" />
<meta name="application-name" content="Experimental Publishing - Special Issue #5 OuNuPo" />
<!-- for Facebook opengraph og: -->
<meta property="og:title" content="Special Issue 5 - OuNuPo - XPUB"/>
<meta property="og:type" content="website"/>
<meta property="og:locale" content="en_US"/>
<meta property="og:site_name" content="Experimental Publishing - Special Issue #5 OuNuPo" />
<meta content="" property="og:image">
<meta property="og:url" content=""/>
<meta property="og:description" content="In the Ouvroir de Numérisation Potentielle (the workshop of potential digitisation, or OuNuPo) the XPUB practitioners reflected on several questions: how is culture shaped by book scanning? Who has access to, and who is excluded from, digital culture? How have free software and open source hardware bootstrapped a new culture of librarians? What happens to text when it becomes data that can be transformed, manipulated and analysed ad nauseam? To answer these questions, the XPUB practitioners have written software, built a book scanner and assembled a unique printed reader." />
<link rel="stylesheet" href="style.css" type="text/css" media="screen" />
</head>
<script type="application/json" class="js-hypothesis-config">
{"showHighlights": false}
</script>
<script src="" async></script>
<body>
<div class="background" >
</div>
<div class="content" >
<h1>Special Issue 5 - OuNuPo</h1>
<div><video width="100%" controls>
<source src="" type="video/mp4">
Your browser does not support the video tag :(
Try a recent version of Firefox or Chromium!
</video></div>
<br><br><br>
<div class="image">
</div>
<p>XPUB, Varia and WORM invite you for an evening of book scanning, short presentations, discussions and software experiments in the context of text digitisation and processing.
28/03/18 - 19:00 at WORM</p>
<h2>OuNuPo, Ouvroir de Numérisation Potentielle, the workshop of potential digitisation</h2>
<img src="images/try_scanning_loop.gif" alt="scanning" style="width: 50%;float: right;">
<p>From January until the end of March 2018 the practitioners of the Media Design Experimental Publishing Master course (XPUB) of the Piet Zwart Institute, in collaboration with Manetta Berends &amp; Cristina Cochior (Varia) and the WORM Pirate Bay, set sail on the vast sea of DIY book scanning, feminist research methodologies, constraint writing, algorithmic literature and the cultures of text digitisation and processing.</p>
<p>The term OuNuPo is derived from OuLiPo (Ouvroir de littérature potentielle), founded in 1960. OuLiPo is a mostly French-speaking gathering of writers and mathematicians interested in constrained writing techniques. A famous technique is the lipogram, which generates texts in which one or more letters have been excluded. OuLiPo eventually led to OuXPo, which expands these creative constraints to other practices (OuCiPo for film making, OuPeinPo for painting, etc.). Following this expansion, XPUB launches OuNuPo, Ouvroir de Numérisation Potentielle, the workshop of potential digitisation, turning the book scanner into a platform for media design and publishing experiments.</p>
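<p>As an illustration of the kind of constraint OuLiPo is known for, a lipogram can be produced in a few lines of Python (a generic sketch, unrelated to the OuNuPo code itself):</p>

```python
def lipogram(text, forbidden="e"):
    """Keep only the words that avoid every forbidden letter."""
    banned = set(forbidden.lower())
    kept = [w for w in text.split() if not (set(w.lower()) & banned)]
    return " ".join(kept)

print(lipogram("the quick brown fox jumps over the lazy dog"))
# words containing an "e" are dropped
```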
<p>In the past three months, the XPUB practitioners have used OuNuPo as a means to reflect on several questions: how is culture shaped by book scanning? Who has access to, and who is excluded from, digital culture? How have free software and open source hardware bootstrapped a new culture of librarians? What happens to text when it becomes data that can be transformed, manipulated and analysed ad nauseam?</p>
<p>To answer these questions, the XPUB practitioners have written software and assembled a unique printed reader, informed by critical and feminist research methodologies. The text selection explores the themes of the digital transfer of cultural biases, Techno/Cyber/Xeno-Feminism, oral culture in the context of knowledge sharing, shadow libraries, database narratives, gender and future librarians. The content of the reader will be scanned by a DIY book scanner built in the past months, and processed by different software processes and performances written by the XPUB practitioners, from chat bots to concrete poetry generators and speech recognition feedback loops.</p>
<h2>Inside the workshop of potential digitisation:</h2>
<p>To approach the workshop of potential digitisation, the following strategy was adopted: two book scanners were built using a variation of the Archivist Book Scanner, a 2014 public domain (CC0 licensed) hardware design developed within the DIY Book Scanner community. Next to that, a unique reader was put together in the form of 6 books on scanning cultures, edited, designed and produced by the XPUB practitioners. Each book is a compilation of 5 to 10 annotated texts addressing a specific question, or topic, relevant to the practitioners. The 6 books are gathered inside a cloth, folded according to the Japanese Furoshiki art of wrapping. Finally, instead of using the book scanner as a mere text scanning and PDF creating apparatus, each XPUB practitioner wrote their own text processing software to echo, reflect upon, or explore further their reading material, as a means to articulate through code the two levels of textual interpretation and dissemination: the human and the machine. Using the book scanner and the software they wrote, they will scan and make public the reader, not as a one-to-one digital copy like a downloadable PDF file, but as the output of a series of software experiments.</p>
<h3>Chapter 1 - Alice Strete</h3>
<em>Techno/Cyber/Xeno-Feminism + carlandre &amp; overunder</em>
<pre>
output/carlandre.txt: ocr/output.txt
	cat $&lt; | python3 src/ > $(@)
output/overunder: ocr/output.txt
	python3 src/
</pre>
<img src="images/Xeno.jpg" width="80%" />
<p>The Intimate and Possibly Subversive Relationship Between Women and Machines Reader explores topics ranging from women's introduction into the technological workforce to the connection between weaving and programming, and the use of technology in favour of the feminist movement. One major concept that appears throughout the reader is an almost mystical connection between women and software writing, embedded deep in women's tradition of weaving not just threads, but networks. Does software have a gender?</p>
<img src="images/22.png" width="80%" />
<p>Echoing her selection of texts, Alice proposes two software-based transformations of her reader: carlandre and overunder. carlandre is a program that generates a pattern inspired by the concrete poetry of Carl Andre: it creates a vertical wave of words whose lengths alternately ascend and descend. overunder is inspired by the relationship between weaving and programming: this interpreted language written in Python translates simple weaving instructions into a digital interpretation of weaving on text.</p>
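<p>The vertical wave carlandre produces can be imagined along these lines (an illustrative sketch, not Alice's actual source): for each slot in an ascending-then-descending run of line lengths, pick a word of exactly that length.</p>

```python
def word_wave(text, high=5):
    """Lay words out one per line, the line lengths rising 1..high and
    falling back; each slot takes a word of exactly that length
    (slots with no matching word left are skipped)."""
    pool = {}
    for w in text.split():
        pool.setdefault(len(w), []).append(w)
    wave = list(range(1, high + 1)) + list(range(high - 1, 0, -1))
    lines = []
    for n in wave:
        if pool.get(n):
            lines.append(pool[n].pop(0))
    return "\n".join(lines)

# prints an ascending-descending column of words
print(word_wave("a to the over jumps the to a"))
```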
<h3>Chapter 2 - Joca van der Horst</h3>
<em>Who is the Librarian + Reading the Structure</em>
<pre>
reading_structure: ocr/output.txt
	## Analyzes OCR'ed text using a Part-of-Speech (POS) tagger. Outputs a string of tags (e.g. nouns, verbs, adjectives, and adverbs). Dependencies: python3's nltk, jinja2, weasyprint
	mkdir -p output/reading_structure
	cp src/reading_structure/jquery.min.js output/reading_structure
	cp src/reading_structure/script.js output/reading_structure
	cp src/reading_structure/style.css output/reading_structure
	cat $&lt; | python3 src/reading_structure/
	weasyprint -s src/reading_structure/print-noun.css output/reading_structure/index.html output/reading_structure/poster_noun.pdf
	weasyprint -s src/reading_structure/print-adv.css output/reading_structure/index.html output/reading_structure/poster_adv.pdf
	weasyprint -s src/reading_structure/print-dppt.css output/reading_structure/index.html output/reading_structure/poster_dppt.pdf
	weasyprint -s src/reading_structure/print-stopword.css output/reading_structure/index.html output/reading_structure/poster_stopword.pdf
	weasyprint -s src/reading_structure/print-neutral.css output/reading_structure/index.html output/reading_structure/poster_neutral.pdf
	weasyprint -s src/reading_structure/print-entity.css output/reading_structure/index.html output/reading_structure/poster_named_entities.pdf
	x-www-browser output/reading_structure/index.html
</pre>
<img src="images/800px-Reader_joca_inside.jpg" width="80%" />
<p>With Who is the Librarian: The gendered image of the librarian and the information scientist, Joca explores two frequent gender stereotypes: librarianship as a job for women and information science as a male-dominated field. The selection of texts in this reader elaborates on the origin of these stereotypes and the different social status of these professions, as a way to approach the question: who do we want to be the librarian of the future?</p>
<img src="images/Reading_structure_screen_interface.png" width="80%" />
<p>Moving from human interpretation to software interpretation, Joca presents Reading the Structure, a piece of software that attempts to make visible to human readers how machines, or to be more precise, specific software implementations of text analysis, interpret texts. Computers read a text differently than we do. One of the common methods for software to analyse a text is to cut the sentences into loose words. Each word can then be labelled for importance, sentiment, or its function in the sentence. During this process of structuring the text, the relation with the original text fades away. Reading the Structure is a reading interface that brings the labels back into the original text. Does that make us, mere humans, able to read like our machines do?</p>
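<p>The label-then-hide idea behind Reading the Structure can be sketched as follows. The real pipeline uses NLTK part-of-speech and sentiment labels; this simplified stand-in only marks stopwords, but shows the same principle of labelling every word and then hiding one category in the reading interface:</p>

```python
# A simplified stand-in for the NLTK pipeline: label each word,
# then "hide" one category the way the reading interface does.
STOPWORDS = {"the", "a", "an", "of", "to", "in", "and", "is", "are"}

def label(text):
    """Tag every word as STOP or WORD (the real tool uses POS tags)."""
    return [(w, "STOP" if w.lower() in STOPWORDS else "WORD")
            for w in text.split()]

def hide(tokens, category):
    """Replace hidden words with underscores of the same length."""
    return " ".join("_" * len(w) if tag == category else w
                    for w, tag in tokens)

tokens = label("the scanner reads the structure of a text")
print(hide(tokens, "STOP"))
# → ___ scanner reads ___ structure __ _ text
```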
<h3>Chapter 3 - Zalán Szakács</h3>
<em>From DIY Book Scanning to the Shadow Librarian + ACCP - Analogue Circular Communication Protocol</em>
<img src="images/Screen_Shot_2018-03-24_at_12.44.38.png" width="80%" />
<p>Zalán's reader, From DIY Book Scanning to the Shadow Librarian, traces the beginnings of shadow libraries back to the Soviet era and explores their impact on contemporary academic publishing. Amongst other things, the text selection informs the reader about activists in this field such as Aaron Swartz, the writer of the Guerilla Open Access Manifesto, and Alexandra Elbakyan, the founder of Sci-Hub.</p>
<img src="images/Manifesto_a_1_small.gif" width="80%" />
<p>Where does the message start? Where does the message end? The coding tool ACCP challenges the user to discover the rules behind the circular decoding system and decipher the message. Using the programming language Python and the software DrawBot, words are processed and mapped into a spatial graphical system in which the 26 letters of the alphabet and the 10 digits are arranged around a circle. With a radial stencil placed in front of the graphics, it is possible to turn the images back into words.</p>
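<p>The circular mapping can be sketched in Python. How ACCP actually lays out its glyphs in DrawBot is not documented here, so the ordering, angles and radius below are assumptions; the sketch only shows the idea of assigning each of the 36 characters a fixed point on a circle:</p>

```python
import math
import string

GLYPHS = string.ascii_lowercase + string.digits  # 36 positions on the circle

def position(ch, radius=100.0):
    """Return the (x, y) point of a character on the assumed ACCP circle."""
    i = GLYPHS.index(ch.lower())
    angle = 2 * math.pi * i / len(GLYPHS)
    return (radius * math.cos(angle), radius * math.sin(angle))

def encode(word):
    """A word becomes a sequence of points; drawing lines between them
    would give the circular glyph (unknown characters are skipped)."""
    return [position(c) for c in word if c.lower() in GLYPHS]
```

A stencil-based decoder would simply invert <code>position</code>: read the angle of each point and look the character up in <code>GLYPHS</code>.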
<h3>Chapter 4 - Natasha Berting</h3>
<em>How Bias Spreads from the Canon to the Web + Erase / Replace</em>
<pre>
erase: tiffs hocrs
	python3 src/
	rm $(input-hocr)
	rm $(images-tiff)
replace: tiffs hocrs
	python3 src/
	rm $(input-hocr)
	rm $(images-tiff)
</pre>
<img src="images/Reader-001.jpg" width="80%" />
<p>Natasha's contribution explores the politics of selection and transparency and, as Johanna Drucker said, "calls attention to the made-ness of knowledge". Her selection of texts explores how human biases and cultural blind spots are transferred from the page to the screen, as companies like Google turn books into databases and bags of words into training sets, and use them in ways that are not always clearly communicated.</p>
<img src="images/Delete1.png" width="80%" />
<p>The texts will be processed by the Erase / Replace scripts, two experiments that question who and what is included or excluded in book scanning. In each script, what is scanned first affects what is visible and what is hidden in what is scanned at a later stage, and so on. The scripts learn each page's vocabulary and favour the most common words. The least common words recede further and further from view, finally disappearing altogether or even being replaced by the more common words. Every scan session results in a different distortion, and outputs the original scanned image, but with the text manipulated.</p>
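<p>The frequency logic of the erase script can be sketched in plain Python. The actual scripts operate on the scanned images through HOCR bounding boxes; this text-only stand-in just shows how a page's most common words survive while the rest are blanked out:</p>

```python
from collections import Counter

def erase_rare(page_text, keep=5):
    """Keep only the `keep` most common words of a page; blank the rest.
    A plain-text sketch of the idea -- the real scripts crop the rare
    words out of the scan image itself."""
    counts = Counter(page_text.lower().split())
    common = {w for w, _ in counts.most_common(keep)}
    return " ".join(w if w.lower() in common else "▯" * len(w)
                    for w in page_text.split())

print(erase_rare("scan the page the scan the", keep=2))
# → scan the ▯▯▯▯ the scan the
```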
<p>Ultimately these texts and scripts are tools for thinking about how knowledge is mined and presented online and how bias spreads from the canon to the web, and for finding opportunities to break open this process.</p>
<h3>Chapter 5 - Alexander Roidl</h3>
<em>Scanning the Database + chatbook</em>
<pre>
chatbook: ocr/output.txt
	python3 src/
oulibot: ocr/output.txt # chatbot based on the knowledge of the scans. Dependencies: nltk_rake, irc, nltk
	python3 src/
</pre>
<img src="images/IMG_6781.JPG" width="80%" />
<p>In Scanning the Database, Alexander offers to navigate in and out of database narratives. His reader looks at how databases are structured and formed, how the data they hold are classified, and how such structuring and classification leads to bias. It shows how important it is to question the authoritative dimension of databases, by looking closely at what is being scanned and how it is stored, organised and selected.</p>
<img src="images/Screenshot_from_2018-03-25_00-19-24.png" width="80%" />
<p>In response to these questions, Alex proposes an alternative interface to such databases: a chat bot that enables the user / viewer to explore the content of scanned material by chatting with the book scanner. By adding an explicit layer of software mediation, the experiment questions how knowledge is built and mediated in the age of machine learning.</p>
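<p>A minimal version of such a chat interface to a scanned text can be sketched as a retrieval bot. This is an assumption about the approach, not the chatbook/oulibot code itself (which uses rake_nltk and irc): here the bot simply answers with the scanned sentence that shares the most words with the question.</p>

```python
import re

def words(s):
    """Lowercase alphanumeric tokens of a sentence."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def reply(question, corpus):
    """Answer with the corpus sentence sharing the most words with the
    question -- a toy stand-in for chatting with the book scanner."""
    q = words(question)
    best = max(corpus, key=lambda s: len(q & words(s)))
    return best if q & words(best) else "Tell me more."

book = ["Databases hold classified data.",
        "Classification leads to bias.",
        "A scanner turns pages into data."]
print(reply("what bias is in the classification?", book))
# → Classification leads to bias.
```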
<h3>Chapter 6 - Angeliki Diakrousi</h3>
<em>From Tedious Tasks to Liberating Orality + ttssr - Reading and speech recognition in loop</em>
<pre>
ttssr-human-only: ocr/output.txt
	bash src/ ocr/output.txt
</pre>
<img src="images/DSC5797.jpg" width="80%" />
<img src="images/Ttssr-algologs.png" width="80%" />
<p>Angeliki's collection of texts, From Tedious Tasks to Liberating Orality - Practices of the Excluded on Sharing Knowledge, refers to oral culture in relation to programming, as a way of sharing knowledge that includes our individually embodied position and voice. The emphasis on the role of personal positioning is often supported by feminist theorists. Similarly, and in contrast to scanning, reading out loud is a way of distributing knowledge in a shared space with other people, and this is the core principle behind the ttssr - Reading and speech recognition in loop software. Using speech recognition software and Python scripts, Angeliki proposes that the audience participate in a system that highlights how each voice bears the personal story of an individual. In this case the involvement of a machine provides another layer of reflection on the reading process.</p>
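<p>The structure of such a feedback loop can be simulated in a few lines. The real ttssr chain speaks a sentence aloud and transcribes it again with PocketSphinx; in this sketch the recognition step is a stand-in function, so the accumulation of "recognition errors" over the rounds can be followed without a microphone:</p>

```python
def feedback_loop(sentence, recognize, rounds=3):
    """Feed a sentence through repeated speak -> listen cycles.
    `recognize` stands in for the transcription step; in the real
    ttssr setup it is text-to-speech plus PocketSphinx recognition."""
    history = [sentence]
    for _ in range(rounds):
        sentence = recognize(sentence)
        history.append(sentence)
    return history

# a toy "recognizer" that loses the last word each round,
# mimicking recognition errors
toy = lambda s: " ".join(s.split()[:-1]) or s
print(feedback_loop("reading out loud is sharing", toy))
```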
<h2>Credits</h2>
<p>OuNuPo was produced as part of a collaboration between XPUB and WORM. The project was developed by the XPUB practitioners (Natasha Berting, Angeliki Diakrousi, Joca van der Horst, Alexander Roidl, Alice Strete and Zalán Szakács) with support from the Varia special guests (Manetta Berends and Cristina Cochior), the WORM Pirate Bay (Wojtek Szustak and Frederic Van de Velde), Mark Van den Borre, and the XPUB staff and tutors (Delphine Bedel, André Castro, Aymeric Mansoux, Michael Murtaugh, Leslie Robbins and Steve Rushton).</p>
<div class="seperator">
<p>------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------</p>
</div>
<h2>OuNuPo-Make Code Repository</h2>
<h1 id="ounupo-make">OuNuPo Make</h1>
<p>Software experiments for the OuNuPo bookscanner, part of Special Issue 5</p>
<p><a href="" class="uri"></a></p>
<h2 id="authors">Authors</h2>
<p>Natasha Berting, Angeliki Diakrousi, Joca van der Horst, Alexander Roidl, Alice Strete and Zalán Szakács.</p>
<h2 id="clone-repository">Clone Repository</h2>
<p><code>git clone</code></p>
<h2 id="general-depencies">General dependencies</h2>
<ul>
<li>Python3</li>
<li>GNU make</li>
<li>Python3 NLTK: <code>pip3 install nltk</code></li>
</ul>
<h1 id="make-commands">Make commands</h1>
<h2 id="sitting-inside-a-pocketsphinx-angeliki">Sitting inside a pocket(sphinx): Angeliki</h2>
<p>Speech recognition feedback loops using the first sentence of a scanned text as input</p>
<p>run: <code>make ttssr-human-only</code></p>
<p>Specific Dependencies:</p>
<ul>
<li>PocketSphinx package: <code>sudo aptitude install pocketsphinx pocketsphinx-en-us</code></li>
<li>PocketSphinx Python library: <code>sudo pip3 install PocketSphinx</code></li>
<li>Other software packages: <code>sudo apt-get install gcc automake autoconf libtool bison swig python-dev libpulse-dev</code></li>
<li>Speech Recognition Python library: <code>sudo pip3 install SpeechRecognition</code></li>
<li>TermColor Python library: <code>sudo pip3 install termcolor</code></li>
<li>PyAudio Python library: <code>sudo pip3 install pyaudio</code></li>
</ul>
<h3 id="licenses">Licenses:</h3>
<p>© 2018 WTFPL – Do What the Fuck You Want to Public License. © 2018 BSD 3-Clause – Berkeley Software Distribution</p>
<h2 id="reading-the-structure-joca">Reading the Structure: Joca</h2>
<p>Takes OCR'ed text as input and labels each word for part of speech, stopwords and sentiment. It then generates a reading interface in which words with a specific label are hidden. The output can be saved as a poster, or exported as JSON featuring the full data set.</p>
<p>Run: <code>make reading_structure</code></p>
<p>Specific Dependencies:</p>
<ul>
<li><a href="">NLTK</a> packages: tokenize.punkt, pos_tag, word_tokenize, sentiment.vader, vader_lexicon (run <code>python3</code>, <code>import nltk</code>, and select these models in the downloader)</li>
<li><a href="">spaCy</a> Python library</li>
<li>spaCy en_core_web_sm model: <code>python3 -m spacy download en_core_web_sm</code></li>
<li><a href="">weasyprint</a></li>
<li><a href="">jinja2</a></li>
<li>font: <a href="">PT Sans</a></li>
<li>font: <a href="">Ubuntu Mono</a></li>
</ul>
<h3 id="license-gnu-agplv3">License: GNU AGPLv3</h3>
<p>Permissions of this license are conditioned on making available the complete source code of licensed works and modifications, which include larger works using a licensed work, under the same license. Copyright and license notices must be preserved. Contributors provide an express grant of patent rights. When a modified version is used to provide a service over a network, the complete source code of the modified version must be made available. See src/reading_structure/license.txt for the full license.</p>
<h2 id="erase-replace-natasha">Erase / Replace: Natasha</h2>
<p>Receives your scanned pages in order, then analyzes each image and its vocabulary. Finds and crops the least common words, and either erases them or replaces them with the most common words. Outputs a PDF of increasingly distorted scan images.</p>
<p>For the erase script run: <code>make erase</code></p>
<p>For the replace script run: <code>make replace</code></p>
<p>Specific Dependencies:</p>
<ul>
<li>NLTK English Corpus:
<ul>
<li>run the NLTK downloader: <code>python -m nltk.downloader</code></li>
<li>select menu &quot;Corpora&quot;</li>
<li>select &quot;stopwords&quot;</li>
<li>&quot;Download&quot;</li>
</ul></li>
<li>Python Image Library (PIL): <code>pip3 install Pillow</code></li>
<li>PDF generation for Python (FPDF): <code>pip3 install fpdf</code></li>
<li>HTML5lib Python library: <code>pip3 install html5lib</code></li>
</ul>
<h3 id="notes-bugs">Notes &amp; Bugs:</h3>
<p>These scripts are very picky about the input images they can work with. For best results, please use high resolution images in the RGB colorspace. Errors can occur when image modes do not match or tesseract cannot successfully make HOCR files.</p>
<h2 id="carlandre-overunder-alice-strete">carlandre &amp; over/under: Alice Strete</h2>
<p>Person who aspires to call herself a software artist sometime next year.</p>
<h3 id="license">License:</h3>
<p>Copyright © 2018 Alice Strete. This work is free. You can redistribute it and/or modify it under the terms of the Do What The Fuck You Want To Public License, Version 2, as published by Sam Hocevar. See for more details.</p>
<h3 id="dependencies">Dependencies:</h3>
<ul>
<li><a href="">pytest</a></li>
</ul>
<p>Programs:</p>
<h3 id="carlandre">carlandre</h3>
<p>Description: Generates concrete poetry from a text file. If you're connected to a printer located at /dev/usb/lp0 you can print the poem.</p>
<p>run: <code>make carlandre</code></p>
<h3 id="overunder">over/under</h3>
<p>Description: An interpreted programming language written in Python3 which translates basic weaving instructions into code and applies them to text.</p>
<p>run: <code>make overunder</code></p>
<h3 id="instructions">Instructions:</h3>
<ul>
<li>over/under works with specific commands which execute specific instructions.</li>
<li>When running, an interpreter will open: <code>&gt;</code></li>
<li>To load your text, type 'load'. This is necessary before any other instructions. Every time you load the text, the previous instructions are discarded.</li>
<li>To see the line you are currently on, type 'show'.</li>
<li>To start your pattern, type 'over' or 'under', each followed by an integer and separated by commas, e.g. over 5, under 5, over 6, under 10.</li>
<li>To move on to the next line of text, press enter twice.</li>
<li>To see your pattern, type 'pattern'.</li>
<li>To save your pattern to a text file, type 'save'.</li>
<li>To leave the program, type 'quit'.</li>
</ul>
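<p>The instructions above suggest an interpreter core along these lines (an illustrative reimplementation, not Alice's source; it assumes "over" keeps characters visible while "under" tucks them away, rendered here as underscores):</p>

```python
def weave(line, instructions):
    """Apply a pattern like "over 5, under 5" to a line of text,
    cycling through the steps until the line runs out."""
    steps = []
    for part in instructions.split(","):
        op, n = part.split()
        steps.append((op, int(n)))
    out, i, s = [], 0, 0
    while i < len(line):
        op, n = steps[s % len(steps)]
        chunk = line[i:i + n]
        out.append(chunk if op == "over" else "_" * len(chunk))
        i += n
        s += 1
    return "".join(out)

print(weave("weaving threads into networks", "over 5, under 5"))
# → weavi_____reads_____ netw____
```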
<h2 id="oulibot-alex">oulibot: Alex</h2>
<p>Description: Chatbot that helps you write a poem based on the text you inserted, by giving you constraints.</p>
<p>run: <code>make oulibot</code></p>
<h4 id="dependencies-1">Dependencies:</h4>
<p>Python libraries:</p>
<ul>
<li>irc: <code>pip3 install irc</code></li>
<li>rake_nltk: <code>pip3 install rake_nltk</code></li>
<li>textblob: <code>pip3 install textblob</code></li>
<li>PIL: <code>pip3 install Pillow</code></li>
<li>numpy: <code>pip3 install numpy</code></li>
<li>tweepy: <code>pip3 install tweepy</code></li>
<li>NLTK stopwords:
<ul>
<li>run the NLTK downloader: <code>python -m nltk.downloader</code></li>
<li>select menu &quot;Corpora&quot;</li>
<li>select &quot;stopwords&quot;</li>
<li>&quot;Download&quot;</li>
</ul></li>
</ul>
</div>
</body>
</html>