{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"https://www.nltk.org/book/\n",
"\n",
"https://www.nltk.org/book/ch00.html#natural-language-toolkit-nltk\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import nltk"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nltk.download(\"book\", download_dir=\"/usr/local/share/nltk_data\")"
]
},
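{
"cell_type": "markdown",
"metadata": {},
"source": [
"NB: the download_dir above assumes you can write to /usr/local/share/nltk_data. If you can't (e.g. on your own machine), a sketch of the alternative: call nltk.download without download_dir and the data goes to a per-user default location (typically ~/nltk_data)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# alternative: download to the per-user default location (~/nltk_data)\n",
"# nltk.download(\"book\")"
]
},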
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk.book import *"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text1"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"type(text1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk.text import Text"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nltk.text.Text"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for line in text1.concordance_list(\"whale\"):\n",
" print (line.left_print, line.query, line.right_print)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text5.tokens"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Reading Words for the Future texts\n",
"\n",
"[Chapter 3 of the NLTK book](https://www.nltk.org/book/ch03.html) discusses working with your own texts using urlopen and the nltk.text.Text class.\n",
"\n",
"We can use [urllib.request.urlopen](https://docs.python.org/3/library/urllib.request.html?highlight=urlopen#urllib.request.urlopen) to pull the \"raw\" URLs of materials from the [SI13 materials on git.xpub.nl](https://git.xpub.nl/XPUB/S13-Words-for-the-Future-materials)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"url = \"https://git.xpub.nl/XPUB/S13-Words-for-the-Future-materials/raw/branch/master/txt-essays/RESURGENCE%20Isabelle%20Stengers.txt\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"url"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from urllib.request import urlopen"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"r = urlopen(url)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"rawtext = r.read()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text = rawtext.decode()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text = urlopen(url).read().decode()"
]
},
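{
"cell_type": "markdown",
"metadata": {},
"source": [
"If a file isn't valid UTF-8, .decode() raises a UnicodeDecodeError. A sketch of a more defensive version (assuming you're happy to replace undecodable bytes with the \\ufffd replacement character):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# explicit encoding + replace any undecodable bytes\n",
"text = urlopen(url).read().decode(\"utf-8\", errors=\"replace\")"
]
},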
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(text)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"text.split?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"words = text.split()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(words)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk import word_tokenize"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tokens = word_tokenize(text)"
]
},
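{
"cell_type": "markdown",
"metadata": {},
"source": [
"Why do len(words) and len(tokens) come out different? A small (made-up) example: str.split only cuts on whitespace, while word_tokenize also separates punctuation and contractions into their own tokens."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sample = \"Don't panic, it's fine.\"\n",
"print(sample.split())\n",
"print(word_tokenize(sample))"
]
},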
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(tokens)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tokens[-10:]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers = Text(tokens)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers.concordance(\"the\", width=82, lines=74)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for line in stengers.concordance_list(\"the\", width=82, lines=74):\n",
" print (line.left_print, line.query, line.right_print)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"with open (\"patches/stengers_the.txt\", \"w\") as output:\n",
" for line in stengers.concordance_list(\"the\", width=82, lines=74):\n",
" print (line.left_print, line.query, line.right_print, file=output)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for line in stengers.concordance_list(\"the\", width=82, lines=74):\n",
" print (line.query)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers.concordance(\"the\", width=3)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers.common_contexts([\"power\", \"victims\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers.dispersion_plot([\"power\", \"the\", \"victims\"])"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from nltk.probability import FreqDist"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq = FreqDist(stengers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq[\"WHALE\"]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq['power']"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq.plot(50)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq.plot(50, cumulative=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Counting Vocabulary\n",
"\n",
"## Making a function\n",
"Investigating a text as a list of words, we discover that we can compare the total number of words with the number of unique words. Comparing the two gives a simple measure of lexical diversity."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(stengers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(set(stengers))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def lexical_diversity(text):\n",
" return len(text) / len(set(text))"
]
},
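{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check with a toy list: 4 tokens, 2 unique words, so each unique word appears 2.0 times on average."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lexical_diversity([\"a\", \"b\", \"a\", \"b\"])"
]
},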
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"lexical_diversity(stengers)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def percentage (count, total):\n",
" return 100 * count / total"
]
},
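{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same function applied to the text itself, e.g. what percentage of the Stengers tokens are the word \"power\" (a sketch; .count here is the plain token-count method):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"percentage(stengers.count(\"power\"), len(stengers))"
]
},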
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"percentage(4, 5)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"NB: BE CAREFUL RUNNING THE FOLLOWING CELLS ... THEY'RE REALLY SLOW...\n",
"Not all code is equal: two different methods can produce the same result\n",
"and still be very different to use in practice.\n",
"\n",
"Why? Because text1 (Moby Dick) is a list,\n",
"so checking `if word not in text1`\n",
"has to scan the whole list of words,\n",
"AND THEN this scan is repeated FOR EVERY WORD in the Stengers text.\n",
"The result is called \"order n squared\" (quadratic) execution: as the number of words in each text increases,\n",
"the time to run the code grows with the *square* of that number.\n",
"It's basically the phenomenon of nested loops on large lists.... SSSSSSSSSLLLLLLLLLOOOOOOOOOOOWWWWWWWWWWW"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# stengers_unique = []\n",
"# for word in stengers.tokens:\n",
"# if word not in text1:\n",
"# stengers_unique.append(word)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# stengers_unique = [x for x in stengers.tokens if x not in text1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"FIX: make a set based on the Moby Dick text. Checking whether something is in a set is VERY FAST compared to scanning a list: roughly constant time (a hash lookup) instead of order n..."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"moby = set(text1)"
]
},
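{
"cell_type": "markdown",
"metadata": {},
"source": [
"You can verify the difference with IPython's %timeit magic (assuming you're running in Jupyter/IPython). Looking up a word that isn't there is the worst case: the list has to be scanned to the end, while the set answers almost immediately."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%timeit \"flibbertigibbet\" in text1\n",
"%timeit \"flibbertigibbet\" in moby"
]
},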
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"the\" in moby"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rather than n\\*n (n squared), the following is roughly just n, since each set lookup takes about constant time: quadratic vs. linear is a huge difference as n gets big"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers_unique = []\n",
"for word in stengers.tokens:\n",
" if word not in moby:\n",
" stengers_unique.append(word)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The above can also be expressed using the more compact form of a list comprehension"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers_unique = [word for word in stengers.tokens if word not in moby]"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"len(stengers_unique)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers_unique"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers_unique_text = Text(stengers_unique)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq = FreqDist(stengers_unique)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"freq.plot(50)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers_unique_text.concordance(\"witches\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Increasing the default figure size"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from IPython.core.pylabtools import figsize"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"figsize(20.0,20.0)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stengers"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Nami asks: How do I get concordances of just the words ending in \"ity\"?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"t = stengers"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ity = []\n",
"for w in stengers:\n",
" if w.endswith(\"ity\"):\n",
" # print (w)\n",
" ity.append(w.lower())\n",
"ity = set(ity)"
]
},
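{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same loop written more compactly as a set comprehension:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"ity = {w.lower() for w in stengers if w.endswith(\"ity\")}"
]
},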
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for word in ity:\n",
" stengers.concordance(word)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"\"Objectivity\".lower()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"set(ity)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Clara asks, what about lines that are shorter than the width you give?\n",
"\n",
"https://www.peterbe.com/plog/how-to-pad-fill-string-by-variable-python\n",
"\n",
"cwidth is how much \"padding\" is needed on each side:\n",
"it's (our page width - the length of the word) divided by 2.\n",
"`//` in Python means \"integer\" (whole number) division"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"for line in stengers.concordance_list(\"resurgence\", width=82, lines=74):\n",
" cwidth = (82 - len(\"resurgence\")) // 2\n",
" # print (cwidth)\n",
" print ( line.left_print.rjust(cwidth), line.query, line.right_print.ljust(cwidth) )\n",
" "
]
},
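{
"cell_type": "markdown",
"metadata": {},
"source": [
"The same idea wrapped in a (hypothetical) helper function, so cwidth is computed from whatever word you pass in rather than being hard-coded:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def print_concordance(text, word, width=82, lines=25):\n",
"    # pad each side to (width - len(word)) // 2 so short lines still align\n",
"    cwidth = (width - len(word)) // 2\n",
"    for line in text.concordance_list(word, width=width, lines=lines):\n",
"        print(line.left_print.rjust(cwidth), line.query, line.right_print.ljust(cwidth))\n",
"\n",
"print_concordance(stengers, \"power\")"
]
},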
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}