|
|
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cells": [
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {
|
|
|
|
|
|
|
|
"slideshow": {
|
|
|
|
|
|
|
|
"slide_type": "slide"
|
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"# DJ Dataset\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"Goal: Produce some audio \"digital deconstructions\" based on training samples from the Common Voice dataset. In the process, explore what a **dataset** is and how it relates to the technique of **Deep Learning** (and situate this term in a larger context). Consider how artistic interventions can go beyond **using** a novel technique to (also) \"talking back\" to these technologies and working on a **critical / reflective level**."
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {
|
|
|
|
|
|
|
|
"slideshow": {
|
|
|
|
|
|
|
|
"slide_type": "slide"
|
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Common Voice\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> Common Voice is part of Mozilla's initiative to help teach machines how real people speak. In addition to the Common Voice dataset, we’re also building an open source speech recognition engine called Deep Speech.\n",
|
|
|
|
|
|
|
|
"> Both of these projects are part of our efforts to bridge the digital speech divide. Voice recognition technologies bring a human dimension to our devices, but developers need an enormous amount of voice data to build them. Currently, most of that data is expensive and proprietary. We want to make voice data freely and publicly available, and make sure the data represents the diversity of real people. Together we can make voice recognition better for everyone.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"<https://commonvoice.mozilla.org/en/about>\n"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## DeepSpeech\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers. \n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> DeepSpeech is an open-source Speech-To-Text engine, using a model trained by machine learning techniques based on Baidu's Deep Speech research paper. Project DeepSpeech uses Google's TensorFlow to make the implementation easier.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"<https://github.com/mozilla/DeepSpeech>"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
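{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of what \"using\" DeepSpeech looks like from Python, for orientation only: load a pretrained model and transcribe one clip. This assumes the `deepspeech` package (0.9.x) is installed and that a released model and scorer have been downloaded; the file names below are placeholders, not files provided by this notebook.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch: transcribe one clip (assumes the deepspeech package and model files are present)\n",
"import wave\n",
"\n",
"import deepspeech\n",
"import numpy as np\n",
"\n",
"model = deepspeech.Model(\"deepspeech-0.9.3-models.pbmm\")      # placeholder path\n",
"model.enableExternalScorer(\"deepspeech-0.9.3-models.scorer\")  # optional language model\n",
"\n",
"w = wave.open(\"clip.wav\", \"rb\")  # placeholder: a 16 kHz, 16-bit mono wav\n",
"audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)\n",
"w.close()\n",
"\n",
"print(model.stt(audio))"
]
},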
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Training Data\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> Large-scale deep learning systems require an abundance of labeled data. For our system we need many recorded utterances and corresponding English transcriptions, but there are few public datasets of sufficient scale. To train our largest models we have thus collected an extensive dataset consisting of 5000 hours of read speech from 9600 speakers. For comparison, we have summarized the labeled datasets available to us in Table 2.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"[Deep Speech: Scaling up end-to-end speech recognition](https://arxiv.org/abs/1412.5567)"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Speech Corpora\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> We started by downloading freely available speech corpora like [TED-LIUM](http://www.openslr.org/7/) and [LibriSpeech](http://www.openslr.org/12/), as well as acquiring paid corpora like [Fisher](https://catalog.ldc.upenn.edu/LDC2004S13) and [Switchboard](https://catalog.ldc.upenn.edu/ldc97s62). We wrote importers in Python for the different data sets that convert the audio files to WAV, split the audio and cleaned up the transcription of unneeded characters like punctuation and accents. Finally we stored the preprocessed data in CSV files that can be used to feed data into the network.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"...\n",
|
|
|
|
|
|
|
|
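"\n",
"*(A minimal sketch of such an importer, under stated assumptions, follows at the end of this section.)*\n",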
"\n",
|
|
|
|
|
|
|
|
"> To build a speech corpus that’s free, open source, and big enough to create meaningful products with, we worked with Mozilla’s Open Innovation team and launched the Common Voice project to collect and validate speech contributions from volunteers all over the world. Today, the team is releasing a large collection of voice data into the public domain. Find out more about the release on the Open Innovation Medium blog.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"[A Journey to <10% Word Error Rate](https://hacks.mozilla.org/2017/11/a-journey-to-10-word-error-rate/)\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"## What is Deep Learning\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> In the past few years, artificial intelligence (AI) has been a subject of intense media hype. Machine learning, deep learning, and AI come up in countless articles, often outside of technology-minded publications. We’re promised a future of intelligent chatbots, self-driving cars, and virtual assistants—a future sometimes painted in a grim light and other times as utopian, where human jobs will be scarce and most economic activity will be handled by robots or AI agents. For a future or current practitioner of machine learning, it’s important to be able to recognize the signal in the noise so that you can tell world-changing developments from overhyped press releases. Our future is at stake, and it’s a future in which you have an active role to play: after reading this book, you’ll be one of those who develop the AI agents. So let’s tackle these questions: What has deep learning achieved so far? How significant is it? Where are we headed next? Should you believe the hype?\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"<https://hub.xpub.nl/bootleglibrary/book/24>\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"* <https://hub.xpub.nl/bootleglibrary/read/24/pdf#page=27>\n",
|
|
|
|
|
|
|
|
"* Classical vs. ML programming <https://hub.xpub.nl/bootleglibrary/read/24/pdf#page=28>"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
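{
"cell_type": "markdown",
"metadata": {},
"source": [
"The \"Speech Corpora\" quote above describes a small but decisive step: importer scripts that convert the audio, clean the transcriptions, and write CSV files that feed the network. A minimal sketch of such an importer, assuming `ffmpeg` is on the PATH; the CSV column names (`wav_filename`, `wav_filesize`, `transcript`) follow the convention of DeepSpeech's importers and are an assumption here, not something this notebook depends on.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# sketch of a tiny importer: mp3 clips -> 16 kHz mono WAV files + a training CSV\n",
"import csv\n",
"import os\n",
"import subprocess\n",
"\n",
"def import_clips(rows, clips_dir, out_dir, out_csv):\n",
"    \"\"\"rows: dicts with 'path' (mp3 filename) and 'sentence' keys, as in the TSV files below.\"\"\"\n",
"    os.makedirs(out_dir, exist_ok=True)\n",
"    with open(out_csv, \"w\", newline=\"\") as f:\n",
"        writer = csv.writer(f)\n",
"        writer.writerow([\"wav_filename\", \"wav_filesize\", \"transcript\"])\n",
"        for row in rows:\n",
"            mp3 = os.path.join(clips_dir, row[\"path\"])\n",
"            wav = os.path.join(out_dir, row[\"path\"].replace(\".mp3\", \".wav\"))\n",
"            # 16 kHz, mono, 16-bit PCM is what most speech-to-text models expect\n",
"            subprocess.run([\"ffmpeg\", \"-y\", \"-i\", mp3, \"-ar\", \"16000\", \"-ac\", \"1\", wav],\n",
"                           check=True, capture_output=True)\n",
"            writer.writerow([wav, os.path.getsize(wav), row[\"sentence\"].lower()])"
]
},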
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Redlining\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> Redlining gets its name because the practice first involved drawing literal red lines on a map. (Sometimes the areas were shaded red instead, as in the map in figure 2.2.) All of Detroit’s Black neighborhoods fall into red areas on this map because housing discrimination and other forms of structural oppression predated the practice. But denying home loans to the people who lived in these neighborhoods reinforced those existing inequalities and, as decades of research have shown, were directly responsible for making them worse.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"* <https://hub.xpub.nl/bootleglibrary/read/575/pdf#page=63>"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## \"Machine Learning for artists\"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> ml4a is a collection of free educational resources devoted to machine learning for artists.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> It contains an in-progress book which is being written by @genekogan and can be seen in draft form here. Four chapters are complete and others are in varying stages of progress or just stubs containing links.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> The book is complemented by a set of 40+ instructional guides maintained by collaborators, along with interactive demos and figures, and video lectures. \n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"https://ml4a.github.io/guides/"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Notes for artistic intervention\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"* Beware reinforcing the hype -- deflate overinflated claims\n",
|
|
|
|
|
|
|
|
"* Track down, look at, make visible, and question the **data sets**\n",
|
|
|
|
|
|
|
|
"* Explore the \"errors\" the model makes\n",
|
|
|
|
|
|
|
|
"* Make the predictive nature of the models more apparent."
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## The Coded Gaze\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> Face detection and classification algorithms are also used by US-based law enforcement for surveillance and crime prevention purposes. In “The Perpetual Lineup”, Garvie and colleagues provide an in-depth analysis of the unregulated police use of face recognition and call for rigorous standards of automated facial analysis, racial accuracy testing, and regularly informing the public about the use of such technology (Garvie et al., 2016). Past research has also shown that the accuracies of face recognition systems used by US-based law enforcement are systematically lower for people labeled female, Black, or between the ages of 18—30 than for other demographic cohorts (Klare et al., 2012). The latest gender classification report from the National Institute for Standards and Technology (NIST) also shows that algorithms NIST evaluated performed worse for female-labeled faces than male-labeled faces (Ngan et al., 2015).\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> Buolamwini, J., Gebru, T. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81:1-15, 2018 Conference on Fairness, Accountability, and Transparency\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"* <https://youtu.be/162VzSzzoPs>\n",
|
|
|
|
|
|
|
|
"* <https://www.ajlunited.org/>\n",
|
|
|
|
|
|
|
|
"* <http://gendershades.org/>\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"## FER2013\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"> These models are not “pure algorithms”, but in fact are the product of being trained with thousands of examples, images that have been given labels like “happy, neutral, angry, and disgusted”. FER2013 itself is a troubled archive, created by university students for a computer science competition, which stipulated that the images would not be part of an already existing collection. As a result, the researchers used Google image search to perform automated searches to produce the collection. But who has made these subjective judgments? To answer why exactly it is that, among the 30,000 collected images, a photo of actor Samuel L. Jackson appears among the examples of “angry” is complex. When producing an interpretation of a new image, the data model reflects the training data and how and who created it. In this work, we wanted to draw a parallel between contemporary data and surveillance practices and those of the colonial photographic projects Antje was critiquing.\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"![](img/FER_Angry_Obama.1024x.jpg)\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"<https://www.roots-routes.org/troubled-archives-the-story-of-how-an-individual-artistic-research-into-archives-becomes-a-collective-and-at-times-community-driven-project/>\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"## Notes\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"* [Adam Harvey](https://ahprojects.com/)\n",
|
|
|
|
|
|
|
|
"* [Recognition Machine](https://recognitionmachine.vandal.ist/)\n",
|
|
|
|
|
|
|
|
"* <https://recognitionmachine.vandal.ist/media/regimes_of_surveillance/> rough notes\n",
|
|
|
|
|
|
|
|
"* <https://www.callingbullshit.org/case_studies/case_study_ml_sexual_orientation.html>\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"FORUM POST showing precarity of the project\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"* Challenge to the idea of \"digital divide\": [Andre Brock: Distributed Blackness: African American Cybercultures](https://hub.xpub.nl/bootleglibrary/book/603)"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## public_url\n",
"\n",
"Derive the public (web-facing) URL of the current working folder on the sandbox (hub.xpub.nl/sandbot), assuming this notebook runs somewhere inside `~/public_html`."
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"from urllib.parse import urljoin, quote as urlquote\n",
|
|
|
|
|
|
|
|
"import os\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"def get_public_url():\n",
|
|
|
|
|
|
|
|
" \"\"\" assumes you are inside a subfolder of your public_html folder \"\"\"\n",
|
|
|
|
|
|
|
|
" user = os.environ.get(\"USER\")\n",
|
|
|
|
|
|
|
|
" rel_pwd = (os.path.relpath(os.getcwd(),os.path.expanduser(\"~/public_html\")))\n",
|
|
|
|
|
|
|
|
" return f\"https://hub.xpub.nl/sandbot/~{user}/{urlquote(rel_pwd)}/\"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"public_url = get_public_url()\n",
|
|
|
|
|
|
|
|
"print (public_url)"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Reading/Filtering the TSV files\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"Each per-language folder in the Common Voice download contains TSV metadata files (such as `train.tsv`); every row links a clip filename (`path`) to its transcription (`sentence`), along with contributor metadata such as `gender`. The language folders are named with ISO 639-1 codes:\n",
"\n",
"https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes\n"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"import csv\n",
|
|
|
|
|
|
|
|
"import json\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"XPUB_URL = \"https://xpub.nl/data/cv-corpus-6.1-singleword/nl/clips/\"\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"want = (\"één\", \"twee\", \"drie\", \"vier\")\n",
|
|
|
|
|
|
|
|
"out = {}\n",
|
|
|
|
|
|
|
|
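"# keep only the rows whose sentence is one of the target words, grouped by sentence\n",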
"with open(\"commonvoice/cv-corpus-6.1-singleword/nl/train.tsv\") as fin:\n",
|
|
|
|
|
|
|
|
" for row in csv.DictReader(fin, delimiter=\"\\t\"):\n",
|
|
|
|
|
|
|
|
" s = row['sentence']\n",
|
|
|
|
|
|
|
|
" if row['sentence'] in want:\n",
|
|
|
|
|
|
|
|
" # print (f\"{row['path']} {row['sentence']} {row['gender']}\")\n",
|
|
|
|
|
|
|
|
" print (f\"{row['sentence']} {XPUB_URL}{row['path']}\")\n",
|
|
|
|
|
|
|
|
" if s not in out:\n",
|
|
|
|
|
|
|
|
" out[s] = []\n",
|
|
|
|
|
|
|
|
" out[s].append(row)\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"with open(\"counting_nl.json\", \"w\") as fout:\n",
|
|
|
|
|
|
|
|
" print (json.dumps(out, indent=2), file=fout)"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"Check the [output](counting_nl.json)"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"## Pass II: download, transcode, trim clips\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"The single-word clips are available from:\n",
"\n",
"* https://xpub.nl/data/cv-corpus-6.1-singleword/nl/clips/\n",
|
|
|
|
|
|
|
|
"* https://recognitionmachine.vandal.ist/media/datasets/cv-corpus-6.1-singleword/"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"# can ffmpeg transcode from a URL ?!"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"# ffmpeg can read directly from an http(s) URL; write the result as a wav for the sox step below\n",
"!ffmpeg -i https://recognitionmachine.vandal.ist/media/datasets/cv-corpus-6.1-singleword/nl/clips/common_voice_nl_21654623.mp3 -y test.wav"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "markdown",
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"https://digitalcardboard.com/blog/2009/08/25/the-sox-of-silence/"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
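"# SoX 'silence' effect: trim from the start until 0.1 s of audio rises above the 1% threshold\n",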
"!sox test.wav test_trim.wav silence 1 0.1 1%"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
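"# adding the second triplet (below_periods -1) also strips silence at the end of (and within) the clip\n",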
"!sox test.wav test_trim.wav silence 1 0.1 1% -1 0.1 1%"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"!ffmpeg -i test_trim.wav test_trim.mp3"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
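"# quick check: reload the grouped rows and show a few entries per word\n",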
"with open(\"counting_nl.json\") as f:\n",
|
|
|
|
|
|
|
|
" data = json.load(f)\n",
|
|
|
|
|
|
|
|
" for key in data:\n",
|
|
|
|
|
|
|
|
" print (f\"key: {key}\")\n",
|
|
|
|
|
|
|
|
" for item in data[key][:5]:\n",
|
|
|
|
|
|
|
|
" print (item['sentence'])"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"import os\n",
|
|
|
|
|
|
|
|
"from unidecode import unidecode\n",
|
|
|
|
|
|
|
|
"\n",
|
|
|
|
|
|
|
|
"ruby_data = {}\n",
|
|
|
|
|
|
|
|
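"# for each word: fetch clips, trim silence, and map the ASCII-folded key (unidecode: 'één' -> 'een') to the trimmed filenames\n",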
"with open(\"counting_nl.json\") as f:\n",
|
|
|
|
|
|
|
|
" data = json.load(f)\n",
|
|
|
|
|
|
|
|
" for key in data:\n",
|
|
|
|
|
|
|
|
" print (f\"key: {key}\")\n",
|
|
|
|
|
|
|
|
" ruby_data[unidecode(key)] = []\n",
|
|
|
|
|
|
|
|
" for item in data[key][:5]:\n",
|
|
|
|
|
|
|
|
" url = f\"https://xpub.nl/data/cv-corpus-6.1-singleword/nl/clips/{item['path']}\"\n",
|
|
|
|
|
|
|
|
" print (url, item['sentence'])\n",
|
|
|
|
|
|
|
|
" mp3 = \"counting_nl/\" + item['path']\n",
|
|
|
|
|
|
|
|
" wav = mp3.replace(\".mp3\", \".wav\")\n",
|
|
|
|
|
|
|
|
" if not os.path.exists(wav):\n",
|
|
|
|
|
|
|
|
" os.system(\"mkdir -p counting_nl\")\n",
|
|
|
|
|
|
|
|
" os.system(f\"ffmpeg -i {url} -y tmp.wav\")\n",
|
|
|
|
|
|
|
|
" os.system(\"sox tmp.wav tmp_trim.wav silence 1 0.1 1% -1 0.1 1%\")\n",
|
|
|
|
|
|
|
|
" os.system(f\"ffmpeg -i tmp_trim.wav {mp3}\")\n",
|
|
|
|
|
|
|
|
" os.system(f\"mv tmp_trim.wav {wav}\")\n",
|
|
|
|
|
|
|
|
" os.system(\"rm tmp.wav\")\n",
|
|
|
|
|
|
|
|
" print (f\"{public_url}{mp3}\")\n",
|
|
|
|
|
|
|
|
" ruby_data[unidecode(key)].append(item['path'].replace(\".mp3\", \".wav\"))\n",
|
|
|
|
|
|
|
|
" # print (item['sentence'])\n",
|
|
|
|
|
|
|
|
"with open(\"counting_nl.ruby.json\", \"w\") as fout:\n",
|
|
|
|
|
|
|
|
" print (json.dumps(ruby_data, indent=2), file=fout)\n"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": [
|
|
|
|
|
|
|
|
"!zip -r counting_nl.zip counting_nl"
|
|
|
|
|
|
|
|
]
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
{
|
|
|
|
|
|
|
|
"cell_type": "code",
|
|
|
|
|
|
|
|
"execution_count": null,
|
|
|
|
|
|
|
|
"metadata": {},
|
|
|
|
|
|
|
|
"outputs": [],
|
|
|
|
|
|
|
|
"source": []
|
|
|
|
|
|
|
|
}
|
|
|
|
|
|
|
|
],
|
|
|
|
|
|
|
|
"metadata": {
|
|
|
|
|
|
|
|
"kernelspec": {
|
|
|
|
|
|
|
|
"display_name": "Python 3",
|
|
|
|
|
|
|
|
"language": "python",
|
|
|
|
|
|
|
|
"name": "python3"
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
"language_info": {
|
|
|
|
|
|
|
|
"codemirror_mode": {
|
|
|
|
|
|
|
|
"name": "ipython",
|
|
|
|
|
|
|
|
"version": 3
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
"file_extension": ".py",
|
|
|
|
|
|
|
|
"mimetype": "text/x-python",
|
|
|
|
|
|
|
|
"name": "python",
|
|
|
|
|
|
|
|
"nbconvert_exporter": "python",
|
|
|
|
|
|
|
|
"pygments_lexer": "ipython3",
|
|
|
|
|
|
|
|
"version": "3.7.3"
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
"toc-showcode": false,
|
|
|
|
|
|
|
|
"toc-showmarkdowntxt": false,
|
|
|
|
|
|
|
|
"toc-showtags": false
|
|
|
|
|
|
|
|
},
|
|
|
|
|
|
|
|
"nbformat": 4,
|
|
|
|
|
|
|
|
"nbformat_minor": 4
|
|
|
|
|
|
|
|
}
|