# OuNuPo Make
Software experiments for the OuNuPo bookscanner, part of Special Issue 5

https://issue.xpub.nl/05/

https://xpub.nl/


## License

## Authors
Natasha Berting, Angeliki Diakrousi, Joca van der Horst, Alexander Roidl, Alice Strete and Zalán Szakács.


## Clone Repository
`git clone https://git.xpub.nl/repos/OuNuPo-make.git`


## General dependencies
* Python3
* GNU make
* Python3 NLTK  `pip3 install nltk`
* NLTK English corpus:
    * run the NLTK downloader: `python3 -m nltk.downloader`
    * in the menu that opens, select "Corpora"
    * select "stopwords"
    * press "Download"



# Make commands

## N+7 (example): Author
Description: Replaces every word with the 7th next word in a dictionary.

run: `make N+7`

Specific Dependencies:
* a
* b
* c
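As an illustration, the N+7 procedure can be sketched in a few lines of Python; the tiny word list here is a stand-in for whichever dictionary the actual rule uses:

```python
import re

# Stand-in dictionary; the real rule would load a much larger word list.
DICTIONARY = sorted(set("""
ant apple arch arm army art axe ball band bank bark barn base bat
bath bead beam bean bear bed bee bell belt bird bit blade blood
""".split()))

def n_plus_7(text, n=7):
    def shift(match):
        word = match.group(0).lower()
        if word not in DICTIONARY:
            return match.group(0)  # words missing from the dictionary pass through
        i = DICTIONARY.index(word)
        return DICTIONARY[(i + n) % len(DICTIONARY)]
    return re.sub(r"[A-Za-z]+", shift, text)

print(n_plus_7("the ant sat on the bed"))
```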


## Sitting inside a pocket(sphinx): Angeliki
Description: Speech recognition feedback loops using the first sentence of a scanned text as input.

run: `make ttssr-human-only`

Specific Dependencies:

* PocketSphinx package: `sudo aptitude install pocketsphinx pocketsphinx-en-us`
* Speech Recognition: `sudo pip3 install SpeechRecognition`
* TermColor: `sudo pip3 install termcolor`
* PyAudio: `pip3 install pyaudio` 
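One turn of such a loop can be sketched as below. `first_sentence` is an illustrative helper, not code from this repository, and the loop assumes a working microphone, the `espeak` synthesizer and the dependencies listed above, so it only runs when invoked directly:

```python
import re
import subprocess

def first_sentence(text):
    # Everything up to the first ., ! or ? serves as the seed sentence.
    match = re.search(r".+?[.!?]", text, re.DOTALL)
    return match.group(0).strip() if match else text.strip()

def feedback_loop(seed, turns=3):
    # Hardware-dependent part: speak the text, listen to it, transcribe
    # it with PocketSphinx, and feed the transcription back in.
    import speech_recognition as sr  # pip3 install SpeechRecognition
    recognizer = sr.Recognizer()
    text = seed
    for _ in range(turns):
        subprocess.run(["espeak", text])           # speak current text aloud
        with sr.Microphone() as source:            # capture it again
            audio = recognizer.listen(source)
        text = recognizer.recognize_sphinx(audio)  # offline transcription
        print(text)

if __name__ == "__main__":
    print(first_sentence("A scanned page begins here. More OCR text follows."))
```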


## Reading the Structure: Joca
Description: Takes OCR'ed text as input and labels each word for part of speech, stopwords and sentiment. It then generates a reading interface
in which words carrying a specific label are hidden. The output can be saved as a poster, or exported as JSON containing the full data set.

run: `make output/reading_structure/index.html`

Specific Dependencies:
* nltk: nltk.tokenize.punkt, ne_chunk, pos_tag, word_tokenize, sentiment.vader
* weasyprint
* jinja2
* font: PT Sans (open-source font, https://www.fontsquirrel.com/fonts/pt-serif)
* font: Ubuntu Mono (open-source font, https://www.fontsquirrel.com/fonts/ubuntu-mono)