Fixed issue with PHP exec() call and run command for audio editing

master
Angeliki 5 years ago
parent dc49fa56bc
commit 082b928878

6 changed files
.gitignore

@ -4,5 +4,7 @@ texts/thesis/drafts/
uploads/
audio/
images/
scripts/*.wav
scripts/*.mp3
+scripts/python-audio-effects/*.wav
+scripts/python-audio-effects/*.mp3

@ -0,0 +1,278 @@
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="styles/jquery-ui.css" rel="stylesheet" type="text/css">
<link href="styles/radioactive.css" rel="stylesheet" type="text/css">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
<!-- <link href="styles/smallscreen.css" rel="stylesheet" type="text/css"> -->
<title>Radioactive Monstrosities</title>
<link rel="shortcut icon" href="images/headphones_logo.png" />
<style>
h3 {font-weight: normal !important; float: left;}
body {background:none !important;}
</style>
</head>
<body>
<!-- table of voices -->
<h1>Radio-active <div class="tooltip-wrap">Monstrosities<div class="tooltip-content-right" ><div>
<?php
include 'texts-radioactive/female_monstrosity.txt';
?>
</div></div></div></h1>
<!-- <h3>CONTRIBUTORS' VOICES</h3> -->
<table class="radioactive">
<tr>
<th><div class="tooltip-wrap">./collective_voice<div class="tooltip-content-right" >The speaker's voice is channeled through multiple voices and in this case through distorted mediated voices of the same person</div></div></th>
<th><div class="tooltip-wrap">./echo_voice<div class="tooltip-content-right">A voice that is more distant from the speaker, sounding like being
in an outer space inside the medium. This voice resembles sounds from an online call</div></div></th>
<th><div class="tooltip-wrap">./lowpitch_voice<div class="tooltip-content-right" >A voice that sounds more male because its pitch is lowered</div></div></th>
<th><div class="tooltip-wrap">./lowpass_voice<div class="tooltip-content-right" >A voice that sounds "shrill" when it is in high frequencies.
This script doesn't allow high frequencies to pass through</div></div></th>
</tr>
<tr>
<?php
$items=array();
$handle=fopen('./uploads/5/index.jsons','r');
if ($handle) {
while (($line=fgets($handle)) !== false) {
$item=json_decode($line,true);
$items[]=$item;
}
}
$items=array_reverse($items);
echo '<td>';
foreach($items as $item) {
$url=substr($item['file'],3);
if (strpos($item['type'], 'collective') !== false){
echo '<div class=file>';
echo '<audio src='.$url.' controls></audio> ';
echo $item['name'];
echo '</div><br />';
}
}
echo '</td>';
echo '<td>';
foreach($items as $item) {
$url=substr($item['file'],3);
exec("~/virtualenvs/radioactive/bin/python3 scripts/echo.py " . escapeshellarg($url));
if (strpos($item['type'], 'echo') !== false){
echo '<div class=file>';
echo '<audio src='.$url.' controls></audio> ';
echo $item['name'];
echo '</div><br />';
}
}
echo '</td>';
echo '<td>';
foreach($items as $item) {
$url=substr($item['file'],3);
if (strpos($item['type'], 'lowpitch') !== false){
echo '<div class=file>';
echo '<audio src='.$url.' controls></audio> ';
echo $item['name'];
echo '</div><br />';
}
}
echo '</td>';
echo '<td>';
foreach($items as $item) {
$url=substr($item['file'],3);
if (strpos($item['type'], 'lowpass') !== false){
echo '<div class=file>';
echo '<audio src='.$url.' controls></audio> ';
echo $item['name'];
echo '</div><br />';
}
}
echo '</td>';
?>
</tr>
<!-- recorder -->
<tr>
<td colspan="4">
<div align="center">
<div class="recorder">
<input type="button" class="start" value="Record" />
<input type="button" class="stop" value="Stop" />
<pre class="status"></pre>
</div>
<div id="playerContainer"></div>
<br />
<div><button id="button_upload" onclick="upload()">Upload</button></div>
<br />
<div id="saved_msg"></div>
<div id="dataUrlcontainer" hidden></div>
<pre id="log" hidden></pre>
</div>
<div class="tooltip-wrap"><i class="fa fa-copyright fa-flip-horizontal"></i><div class="tooltip-content-right" ><div>
[Angeliki Diakrousi, Radioactive Monstrosities, 2020. Rotterdam].
Copyleft: This is a free work, you can copy, distribute, and modify it under the terms of the Free Art License
http://artlibre.org/licence/lal/en/
You can choose the type of license you want.
If you want your full name to appear in the contributors list, send me your name here: angeliki@genderchangers.org
The copyrights of the voices belong to you. I ask to use them in the performance radio-active monstrosities, where I will narrate... only for this recording. You can choose to include your name or not. Your voice recordings will be used and credited accordingly.
</div></div></div>
<br>
<div class="tooltip-wrap"><i class="fa fa-file"></i><div class="tooltip-content-right" ><div>
<?php
include 'texts-radioactive/about.txt';
?>
</div></div></div><br>
<div class="tooltip-wrap"><i class="fa fa-gears"></i><div class="tooltip-content-right" ><div>
<?php
include 'texts-radioactive/instructions.txt';
?>
</div></div></div><br>
<a href="https://gitlab.com/nglk/radioactive-web">git</a>
</td>
</tr>
<!-- texts-radioactive and references -->
<tr>
<?php
echo '<td>';
include 'texts-radioactive/voice_collective.txt';
echo '</td>';
echo '<td>';
include 'texts-radioactive/voice_echo.txt';
echo '</td>';
echo '<td>';
include 'texts-radioactive/voice_lowpitch.txt';
echo '</td>';
echo '<td>';
include 'texts-radioactive/voice_lowpass.txt';
echo '</td>';
?>
</tr>
</table>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.0/jquery.min.js"></script>
<script src="js/jquery.min.js"></script>
<script src="js/mp3recorder.js"></script>
<script src="js/draggable.js"></script>
<script src="js/jquery-1.12.4.js"></script>
<script src="js/jquery-ui.js"></script>
<script src="js/jquery.ui.touch-punch.min.js"></script>
<script src="js/main.js"></script>
<!-- scripts for recorder -->
<script>
var audio_context;
function __log(e, data) {
log.innerHTML += "\n" + e + " " + (data || '');
}
$(function() {
try {
// webkit shim
window.AudioContext = window.AudioContext || window.webkitAudioContext;
navigator.getUserMedia = ( navigator.getUserMedia ||
navigator.webkitGetUserMedia ||
navigator.mozGetUserMedia ||
navigator.msGetUserMedia);
window.URL = window.URL || window.webkitURL;
audio_context = new AudioContext();
__log('Audio context set up.');
__log('navigator.getUserMedia ' + (navigator.getUserMedia ? 'available.' : 'not present!'));
} catch (e) {
alert('No web audio support in this browser!');
}
$('.recorder .start').on('click', function() {
$this = $(this);
$recorder = $this.parent();
navigator.getUserMedia({audio: true}, function(stream) {
var recorderObject = new MP3Recorder(audio_context, stream, { statusContainer: $recorder.find('.status'), statusMethod: 'replace' });
$recorder.data('recorderObject', recorderObject);
recorderObject.start();
}, function(e) { });
});
document.getElementById("button_upload").style.display='none';
$('.recorder .stop').on('click', function() {
$this = $(this);
$recorder = $this.parent();
recorderObject = $recorder.data('recorderObject');
recorderObject.stop();
recorderObject.exportMP3(function(base64_data) {
var url = 'data:audio/mp3;base64,' + base64_data;
var au = document.createElement('audio');
document.getElementById("playerContainer").innerHTML = "";
// console.log(url)
document.getElementById("button_upload").style.display='block';
var duc = document.getElementById("dataUrlcontainer");
duc.innerHTML = url;
au.controls = true;
au.src = url;
//$recorder.append(au);
$('#playerContainer').append(au);
recorderObject.logStatus('');
});
});
});
</script>
<script>
function upload(){
var name = prompt('Enter a title or/and your name','Unnamed clip');
var type = prompt('Enter type of distortion. Choose between collective,echo,lowpitch,lowpass','No Type');
var license = prompt('Enter a license','No License');
var dataURL = document.getElementById("dataUrlcontainer").innerHTML;
$.ajax({
type: "POST",
url: "scripts/uploadMp3_5.php",
data: {
base64: dataURL,
name: name,
type: type
}
}).done(function(o) {
console.log('saved');
document.getElementById("saved_msg").innerHTML = "Uploaded!! Refresh and see your voice message in the list below :<";
});
}
</script>
</body>
</html>
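The PHP block in the page above lists uploads by reading `uploads/5/index.jsons` line by line, decoding each line as JSON, reversing the order, and filtering on the `type` field. A minimal Python sketch of that logic (the `file`, `name` and `type` keys come from the code above; the sample lines are invented for illustration):

```python
import json

# Two invented index lines in the shape the PHP reads:
# one JSON object per line, with "file", "name" and "type" keys.
lines = [
    '{"file": "../uploads/5/a.mp3", "name": "a", "type": "echo"}',
    '{"file": "../uploads/5/b.mp3", "name": "b", "type": "lowpass"}',
]
items = [json.loads(line) for line in lines]
items.reverse()  # newest first, like array_reverse() in the PHP

# Keep only "echo" entries and strip the leading "../",
# mirroring strpos(...) and substr($item['file'], 3).
echo_urls = [item['file'][3:] for item in items if 'echo' in item['type']]
print(echo_urls)  # ['uploads/5/a.mp3']
```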

@ -1,3 +1,4 @@
+<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
@ -21,8 +22,20 @@
include 'texts-radioactive/female_monstrosity.txt';
?>
</div></div></div></h1>
+<!-- intro pop up -->
+<div class="draggable radioactive popup"><span onclick="this.parentElement.style.display='none'" class="topleft">&times</span><div>
+<?php
+include 'texts-radioactive/about-popup.txt';
+?>
+</div></div>
+<!-- <FORM ACTION="scripts/cgi-bin/echo.cgi"> -->
<!-- <h3>CONTRIBUTORS' VOICES</h3> -->
-<table class="radioactive">
+<table width="100%" class="radioactive">
<tr>
<th><div class="tooltip-wrap">./collective_voice<div class="tooltip-content-right" >The speaker's voice is channeled through multiple voices and in this case through distorted mediated voices of the same person</div></div></th>
<th><div class="tooltip-wrap">./echo_voice<div class="tooltip-content-right">A voice that is more distant from the speaker, sounding like being
@ -90,6 +103,8 @@ This script doesn't allow high frequencies to pass through</div></div></th>
?>
</tr>
<!-- recorder -->
<tr>
<td colspan="4">
@ -112,12 +127,15 @@ This script doesn't allow high frequencies to pass through</div></div></th>
</div>
<div class="tooltip-wrap"><i class="fa fa-copyright fa-flip-horizontal"></i><div class="tooltip-content-right" ><div>
[Angeliki Diakrousi, Radioactive Monstrosities, 2020. Rotterdam].
-Copyleft: This is a free work, you can copy, distribute, and modify it under the terms of the Free Art License
-http://artlibre.org/licence/lal/en/
-You can choose the type of lisence you want
-If you want your full name to appear in the contributors send me your name here: angeliki@genderchangers.org
-The copyrights of the voices belong to you. I ask you to use them in the performance radio-active monstrosities where I will narrate ...Only for this recording. You can choose to include your name or not.Your voice recordings will be used and credited accordingly
+Copyleft: This is a free work, you can copy, distribute, and modify it under the terms of the <a href="http://artlibre.org/licence/lal/en/">Free Art License</a><br>
+If you want your full name to appear in the contributors list send me your name here: angeliki@genderchangers.org. The voices files will be part of this platform and future performances.
</div></div></div>
<br>
<div class="tooltip-wrap"><i class="fa fa-file"></i><div class="tooltip-content-right" ><div>
<?php
@ -153,17 +171,8 @@ The copyrights of the voices belong to you. I ask you to use them in the perform
</tr>
</table>
-<?php
-exec("~/virtualenvs/radioactive/bin/python3 scripts/echo.py").$name;
-?>
-<!-- <audio src='uploads/5/out.wav' controls></audio> -->
-<!-- exec("~/virtualenvs/radioactive/bin/python3 mypythonscript.py " -->
-<!-- exec("scripts/echo.py ".$name, $output); -->
@ -254,7 +263,6 @@ exec("~/virtualenvs/radioactive/bin/python3 scripts/echo.py").$name;
function upload(){
var name = prompt('Enter a title or/and your name','Unnamed clip');
var type = prompt('Enter type of distortion. Choose between collective,echo,lowpitch,lowpass','No Type');
-var license = prompt('Enter a license. Choose between collective,echo,lowpitch,lowpass','No Type');
var dataURL = document.getElementById("dataUrlcontainer").innerHTML;
$.ajax({
@ -264,10 +272,27 @@ exec("~/virtualenvs/radioactive/bin/python3 scripts/echo.py").$name;
base64: dataURL,
name: name,
type: type
-}
+},
+success: function(data)
+{
+// ? :)
+alert (data);
+},
+error : function(data)
+{
+alert("ajax error, json: " + data);
+//for (var i = 0, l = json.length; i < l; ++i)
+//{
+// alert (json[i]);
+//}
+}
}).done(function(o) {
console.log('saved');
-document.getElementById("saved_msg").innerHTML = "Uploaded!! Refresh and see your voice message in the list below :<";
+document.getElementById("saved_msg").innerHTML = "Uploaded!! Your voice will be distorted and re-uploaded in a few hours :<";
});
@ -275,6 +300,10 @@ exec("~/virtualenvs/radioactive/bin/python3 scripts/echo.py").$name;
</script>
+<!-- <form name="pyform" method="POST" action="/scripts/cgi-bin/echo.cgi">
+<input type="text" name="fname" />
+<input type="submit" name="submit" value="Submit" />
+</form> -->
</body>
</html>

@ -0,0 +1,22 @@
#!/home/lain/virtualenvs/radioactive/bin/python3
import cgi,cgitb
cgitb.enable()
import pydub
from pydub import AudioSegment
from pysndfx import AudioEffectsChain
import sys
# print (sys.argv[1])
mp3_audio = AudioSegment.from_file("../../uploads/5/nainai_echo.mp3", format="mp3")
# mp3_audio = AudioSegment.from_file('page_echo.mp3', format="mp3")
mp3_audio.export("../../uploads/5/nainai_echo.wav", format="wav")
infile = '../../uploads/5/nainai_echo.wav'
outfile = '../../uploads/5/nainai_echo_output.wav'
AudioEffectsChain().reverb()(infile, outfile)
form = cgi.FieldStorage()  # parsed CGI form data (currently unused)

@ -0,0 +1,19 @@
#! /home/lain/virtualenvs/radioactive/bin/python3
# Import the package and create an audio effects chain function.
import pydub
from pydub import AudioSegment
from pysndfx import AudioEffectsChain
import sys
# print (sys.argv[1])
mp3_audio = AudioSegment.from_file(sys.argv[1], format="mp3")
# mp3_audio = AudioSegment.from_file('page_echo.mp3', format="mp3")
mp3_audio.export("temp.wav", format="wav")
infile = 'temp.wav'
# infile = sys.argv[1]
outfile = sys.argv[1]+'.wav'
AudioEffectsChain().reverb()(infile, outfile)
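The script above converts the uploaded mp3 to a temporary wav, applies reverb, and writes the result next to the input as `<input>.wav`. A tiny sketch of that naming convention (inferred from the code above, not from any spec; the helper name is illustrative):

```python
def distorted_output_path(mp3_path):
    # echo.py exports the mp3 to temp.wav, applies reverb, and writes
    # the result to the original input path plus a ".wav" suffix.
    return mp3_path + '.wav'

print(distorted_output_path('uploads/5/voice_echo.mp3'))
# uploads/5/voice_echo.mp3.wav
```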

@ -0,0 +1,4 @@
# coding=utf-8
from .dsp import AudioEffectsChain
__all__ = ['AudioEffectsChain']

@ -0,0 +1,543 @@
# coding=utf-8
"""A lightweight Python wrapper of SoX's effects."""
import shlex
from io import BufferedReader, BufferedWriter
from subprocess import PIPE, Popen
import numpy as np
from .sndfiles import (
FileBufferInput,
FileBufferOutput,
FilePathInput,
FilePathOutput,
NumpyArrayInput,
NumpyArrayOutput,
logger,
)
def mutually_exclusive(*args):
return sum(arg is not None for arg in args) < 2
class AudioEffectsChain:
def __init__(self):
self.command = []
def equalizer(self, frequency, q=1.0, db=-3.0):
"""equalizer takes three parameters: filter center frequency in Hz, "q"
or band-width (default=1.0), and a signed number for gain or
attenuation in dB.
Beware of clipping when using positive gain.
"""
self.command.append('equalizer')
self.command.append(frequency)
self.command.append(str(q) + 'q')
self.command.append(db)
return self
def bandpass(self, frequency, q=1.0):
"""bandpass takes 2 parameters: filter center frequency in Hz and "q"
or band-width (default=1.0).
It gradually removes frequencies outside the band specified.
"""
self.command.append('bandpass')
self.command.append(frequency)
self.command.append(str(q) + 'q')
return self
def bandreject(self, frequency, q=1.0):
"""bandreject takes 2 parameters: filter center frequency in Hz and "q"
or band-width (default=1.0).
It gradually removes frequencies within the band specified.
"""
self.command.append('bandreject')
self.command.append(frequency)
self.command.append(str(q) + 'q')
return self
def lowshelf(self, gain=-20.0, frequency=100, slope=0.5):
"""lowshelf takes 3 parameters: a signed number for gain or attenuation
in dB, filter frequency in Hz and slope (default=0.5, maximum=1.0).
Beware of Clipping when using positive gain.
"""
self.command.append('bass')
self.command.append(gain)
self.command.append(frequency)
self.command.append(slope)
return self
def highshelf(self, gain=-20.0, frequency=3000, slope=0.5):
"""highshelf takes 3 parameters: a signed number for gain or
attenuation in dB, filter frequency in Hz and slope (default=0.5).
Beware of clipping when using positive gain.
"""
self.command.append('treble')
self.command.append(gain)
self.command.append(frequency)
self.command.append(slope)
return self
def highpass(self, frequency, q=0.707):
"""highpass takes 2 parameters: filter frequency in Hz below which
frequencies will be attenuated and q (default=0.707).
Beware of clipping when using high q values.
"""
self.command.append('highpass')
self.command.append(frequency)
self.command.append(str(q) + 'q')
return self
def lowpass(self, frequency, q=0.707):
"""lowpass takes 2 parameters: filter frequency in Hz above which
frequencies will be attenuated and q (default=0.707).
Beware of clipping when using high q values.
"""
self.command.append('lowpass')
self.command.append(frequency)
self.command.append(str(q) + 'q')
return self
def limiter(self, gain=3.0):
"""limiter takes one parameter: gain in dB.
Beware of adding too much gain, as it can cause audible
distortion. See the compand effect for a more capable limiter.
"""
self.command.append('gain')
self.command.append('-l')
self.command.append(gain)
return self
def normalize(self):
"""normalize has no parameters.
It boosts level so that the loudest part of your file reaches
maximum, without clipping.
"""
self.command.append('gain')
self.command.append('-n')
return self
def compand(self, attack=0.2, decay=1, soft_knee=2.0, threshold=-20, db_from=-20.0, db_to=-20.0):
"""compand takes 6 parameters:
attack (seconds), decay (seconds), soft_knee (ex. 6 results
in 6:1 compression ratio), threshold (a negative value
in dB), the level below which the signal will NOT be companded
(a negative value in dB), the level above which the signal will
NOT be companded (a negative value in dB). This effect
manipulates dynamic range of the input file.
"""
self.command.append('compand')
self.command.append(str(attack) + ',' + str(decay))
self.command.append(str(soft_knee) + ':' + str(threshold) + ',' + str(db_from) + ',' + str(db_to))
return self
def sinc(self,
high_pass_frequency=None,
low_pass_frequency=None,
left_t=None,
left_n=None,
right_t=None,
right_n=None,
attenuation=None,
beta=None,
phase=None,
M=None,
I=None,
L=None):
"""sinc takes 12 parameters:
high_pass_frequency in Hz,
low_pass_frequency in Hz,
left_t,
left_n,
right_t,
right_n,
attenuation in dB,
beta,
phase,
M,
I,
L
This effect creates a steep bandpass or
bandreject filter. You may specify as few as the first two
parameters. Setting the high-pass parameter to a lower value
than the low-pass creates a band-reject filter.
"""
self.command.append("sinc")
if not mutually_exclusive(attenuation, beta):
raise ValueError("Attenuation (-a) and beta (-b) are mutually exclusive arguments.")
if attenuation is not None and beta is None:
self.command.append('-a')
self.command.append(str(attenuation))
elif attenuation is None and beta is not None:
self.command.append('-b')
self.command.append(str(beta))
if not mutually_exclusive(phase, M, I, L):
raise ValueError("Phase (-p), -M, L, and -I are mutually exclusive arguments.")
if phase is not None:
self.command.append('-p')
self.command.append(str(phase))
elif M is not None:
self.command.append('-M')
elif I is not None:
self.command.append('-I')
elif L is not None:
self.command.append('-L')
if not mutually_exclusive(left_t, left_n):
raise ValueError("Transition bands options (-t or -n) are mutually exclusive.")
if left_t is not None:
self.command.append('-t')
self.command.append(str(left_t))
if left_n is not None:
self.command.append('-n')
self.command.append(str(left_n))
if high_pass_frequency is not None and low_pass_frequency is None:
self.command.append(str(high_pass_frequency))
elif high_pass_frequency is not None and low_pass_frequency is not None:
self.command.append(str(high_pass_frequency) + '-' + str(low_pass_frequency))
elif high_pass_frequency is None and low_pass_frequency is not None:
self.command.append(str(low_pass_frequency))
if not mutually_exclusive(right_t, right_n):
raise ValueError("Transition bands options (-t or -n) are mutually exclusive.")
if right_t is not None:
self.command.append('-t')
self.command.append(str(right_t))
if right_n is not None:
self.command.append('-n')
self.command.append(str(right_n))
return self
def bend(self, bends, frame_rate=None, over_sample=None):
"""TODO Add docstring."""
self.command.append("bend")
if frame_rate is not None and isinstance(frame_rate, int):
self.command.append('-f %s' % frame_rate)
if over_sample is not None and isinstance(over_sample, int):
self.command.append('-o %s' % over_sample)
for bend in bends:
self.command.append(','.join(bend))
return self
def chorus(self, gain_in, gain_out, decays):
"""TODO Add docstring."""
self.command.append("chorus")
self.command.append(gain_in)
self.command.append(gain_out)
for decay in decays:
modulation = decay.pop()
numerical = decay
self.command.append(' '.join(map(str, numerical)) + ' -' + modulation)
return self
def delay(self,
gain_in=0.8,
gain_out=0.5,
delays=list((1000, 1800)),
decays=list((0.3, 0.25)),
parallel=False):
"""delay takes 4 parameters: input gain (max 1), output gain
and then two lists, delays and decays.
Each list is a pair of comma separated values within
parenthesis.
"""
self.command.append('echo' + ('s' if parallel else ''))
self.command.append(gain_in)
self.command.append(gain_out)
self.command.extend(list(sum(zip(delays, decays), ())))
return self
def echo(self, **kwargs):
"""echo is an alias for delay; it accepts the same parameters."""
return self.delay(**kwargs)
def fade(self):
"""TODO Add docstring."""
raise NotImplementedError()
def flanger(self, delay=0, depth=2, regen=0, width=71, speed=0.5, shape='sine', phase=25, interp='linear'):
"""TODO Add docstring."""
raise NotImplementedError()
def gain(self, db):
"""gain takes one parameter: gain in dB."""
self.command.append('gain')
self.command.append(db)
return self
def mcompand(self):
"""TODO Add docstring."""
raise NotImplementedError()
def noise_reduction(self, amount=0.5):
"""TODO Add docstring."""
# TODO Run sox once with noiseprof on silent portions to generate a noise profile.
raise NotImplementedError()
def oops(self):
"""TODO Add docstring."""
raise NotImplementedError()
def overdrive(self, gain=20, colour=20):
"""overdrive takes 2 parameters: gain in dB and colour, which affects
the character of the distortion effect.
Both have a default value of 20. TODO - changing colour does not seem to have an audible effect
"""
self.command.append('overdrive')
self.command.append(gain)
self.command.append(colour)
return self
def phaser(self,
gain_in=0.9,
gain_out=0.8,
delay=1,
decay=0.25,
speed=2,
triangular=False):
"""phaser takes 6 parameters: input gain (max 1.0), output gain (max
1.0), delay, decay, speed and triangular (the LFO shape, set to True for
triangular or False for sinusoidal)."""
self.command.append("phaser")
self.command.append(gain_in)
self.command.append(gain_out)
self.command.append(delay)
self.command.append(decay)
self.command.append(speed)
if triangular:
self.command.append('-t')
else:
self.command.append('-s')
return self
def pitch(self, shift,
use_tree=False,
segment=82,
search=14.68,
overlap=12):
"""pitch takes 5 parameters: shift, use_tree (True or False), segment,
search and overlap."""
self.command.append("pitch")
if use_tree:
self.command.append('-q')
self.command.append(shift)
self.command.append(segment)
self.command.append(search)
self.command.append(overlap)
return self
def loop(self):
"""TODO Add docstring."""
self.command.append('repeat')
self.command.append('-')
return self
def reverb(self,
reverberance=50,
hf_damping=50,
room_scale=100,
stereo_depth=100,
pre_delay=20,
wet_gain=0,
wet_only=False):
"""reverb takes 7 parameters: reverberance, high-frequency damping,
room scale, stereo depth, pre-delay, wet gain and wet only (True or
False)"""
self.command.append('reverb')
if wet_only:
self.command.append('-w')
self.command.append(reverberance)
self.command.append(hf_damping)
self.command.append(room_scale)
self.command.append(stereo_depth)
self.command.append(pre_delay)
self.command.append(wet_gain)
return self
def reverse(self):
"""reverse takes no parameters.
It plays the input sound backwards.
"""
self.command.append("reverse")
return self
def speed(self, factor, use_semitones=False):
"""speed takes 2 parameters: factor and use-semitones (True or False).
When use-semitones = False, a factor of 2 doubles the speed and raises the pitch an octave. The same result is achieved with factor = 1200 and use semitones = True.
"""
self.command.append("speed")
self.command.append(factor if not use_semitones else str(factor) + "c")
return self
def synth(self):
raise NotImplementedError()
def tempo(self,
factor,
use_tree=False,
opt_flag=None,
segment=82,
search=14.68,
overlap=12):
"""tempo takes 6 parameters: factor, use tree (True or False), option
flag, segment, search and overlap).
This effect changes the duration of the sound without modifying
pitch.
"""
self.command.append("tempo")
if use_tree:
self.command.append('-q')
if opt_flag in ('l', 'm', 's'):
self.command.append('-%s' % opt_flag)
self.command.append(factor)
self.command.append(segment)
self.command.append(search)
self.command.append(overlap)
return self
def tremolo(self, freq, depth=40):
"""tremolo takes two parameters: frequency and depth (max 100)"""
self.command.append("tremolo")
self.command.append(freq)
self.command.append(depth)
return self
def trim(self, positions):
"""TODO Add docstring."""
self.command.append("trim")
for position in positions:
# TODO: check if the position means something
self.command.append(position)
return self
def upsample(self, factor):
"""TODO Add docstring."""
self.command.append("upsample")
self.command.append(factor)
return self
def vad(self):
raise NotImplementedError()
def vol(self, gain, type="amplitude", limiter_gain=None):
"""vol takes three parameters: gain, gain-type (amplitude, power or dB)
and limiter gain."""
self.command.append("vol")
if type in ["amplitude", "power", "dB"]:
self.command.append(type)
else:
raise ValueError("Type has to be dB, amplitude or power.")
if limiter_gain is not None:
self.command.append(str(limiter_gain))
return self
def custom(self, command):
"""Run arbitrary SoX effect commands.
Examples:
custom('echo 0.8 0.9 1000 0.3') for an echo effect.
References:
- https://linux.die.net/man/1/soxexam
- http://sox.sourceforge.net/sox.html
- http://tldp.org/LDP/LG/issue73/chung.html
- http://dsl.org/cookbook/cookbook_29.html
"""
self.command.append(command)
return self
def __call__(
self,
src,
dst=np.ndarray,
sample_in=44100, # used only for arrays
sample_out=None,
encoding_out=None,
channels_out=None,
allow_clipping=True):
# depending on the input, using the right object to set up the input data arguments
stdin = None
if isinstance(src, str):
infile = FilePathInput(src)
stdin = src
elif isinstance(src, np.ndarray):
infile = NumpyArrayInput(src, sample_in)
stdin = src
elif isinstance(src, BufferedReader):
infile = FileBufferInput(src)
stdin = infile.data # retrieving the data from the file reader (np array)
else:
infile = None
# finding out which output encoding to use in case the output is ndarray
if encoding_out is None and dst is np.ndarray:
if isinstance(stdin, np.ndarray):
encoding_out = stdin.dtype.type
elif isinstance(stdin, str):
encoding_out = np.float32
# finding out which channel count to use (defaults to the input file's channel count)
if channels_out is None:
channels_out = infile.channels
if sample_out is None: # if the output samplerate isn't specified, default to input's
sample_out = sample_in
# same as for the input data, but for the destination
if isinstance(dst, str):
outfile = FilePathOutput(dst, sample_out, channels_out)
elif dst is np.ndarray:
outfile = NumpyArrayOutput(encoding_out, sample_out, channels_out)
elif isinstance(dst, BufferedWriter):
outfile = FileBufferOutput(dst, sample_out, channels_out)
else:
outfile = None
cmd = shlex.split(
' '.join([
'sox',
'-N',
'-V1' if allow_clipping else '-V2',
infile.cmd_prefix if infile is not None else '-d',
outfile.cmd_suffix if outfile is not None else '-d',
] + list(map(str, self.command))),
posix=False,
)
logger.debug("Running command : %s" % cmd)
if isinstance(stdin, np.ndarray):
stdout, stderr = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE).communicate(stdin.tobytes(order='F'))
else:
stdout, stderr = Popen(cmd, stdout=PIPE, stderr=PIPE).communicate()
if stderr:
raise RuntimeError(stderr.decode())
elif stdout:
outsound = np.frombuffer(stdout, dtype=encoding_out)
if channels_out > 1:
outsound = outsound.reshape((channels_out, int(len(outsound) / channels_out)), order='F')
if isinstance(outfile, FileBufferOutput):
outfile.write(outsound)
return outsound
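The chain above works by accumulating SoX effect tokens in `self.command` and returning `self` so calls can be chained; `__call__` then joins the tokens into a single `sox` command line. A condensed, dependency-free mirror of that pattern (the class and method bodies here are an illustration, not the library's API):

```python
class MiniChain:
    """Accumulates SoX-style effect tokens, as AudioEffectsChain does."""

    def __init__(self):
        self.command = []

    def reverb(self, reverberance=50):
        self.command += ['reverb', str(reverberance)]
        return self  # returning self is what makes chaining work

    def lowpass(self, frequency, q=0.707):
        self.command += ['lowpass', str(frequency), str(q) + 'q']
        return self

tokens = MiniChain().reverb().lowpass(3000).command
print(' '.join(tokens))  # reverb 50 lowpass 3000 0.707q
```

The joined tokens become the effect tail of the full command, e.g. `sox -N -V1 in.wav out.wav reverb 50 lowpass 3000 0.707q`.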

@ -0,0 +1,99 @@
import logging
import shlex
import wave
from subprocess import PIPE, Popen
import numpy as np
ENCODINGS_MAPPING = {
np.int16: 's16',
np.float32: 'f32',
np.float64: 'f64',
}
PIPE_CHAR = '-'
logger = logging.getLogger('pysndfx')
class SoxInput(object):
pipe = '-'
def __init__(self):
self.cmd_prefix = None
class FilePathInput(SoxInput):
def __init__(self, filepath):
super(FilePathInput, self).__init__()
info_cmd = 'sox --i -c ' + filepath
logger.debug("Running info command : %s" % info_cmd)
stdout, stderr = Popen(shlex.split(info_cmd, posix=False), stdout=PIPE, stderr=PIPE).communicate()
self.channels = int(stdout)
self.cmd_prefix = filepath
class FileBufferInput(SoxInput):
def __init__(self, fp):
super(FileBufferInput, self).__init__()
wave_file = wave.open(fp, mode='rb') # wave.open() seems to support only 16bit encodings
self.channels = wave_file.getnchannels()
self.data = np.frombuffer(wave_file.readframes(wave_file.getnframes()), dtype=np.int16)
self.cmd_prefix = ' '.join([
'-t s16', # int16 encoding by default
'-r ' + str(wave_file.getframerate()),
'-c ' + str(self.channels),
PIPE_CHAR,
])
class NumpyArrayInput(SoxInput):
def __init__(self, snd_array, rate):
super(NumpyArrayInput, self).__init__()
self.channels = snd_array.ndim
self.cmd_prefix = ' '.join([
'-t ' + ENCODINGS_MAPPING[snd_array.dtype.type],
'-r ' + str(rate),
'-c ' + str(self.channels),
PIPE_CHAR,
])
class SoxOutput(object):
def __init__(self):
self.cmd_suffix = None
class FilePathOutput(SoxOutput):
def __init__(self, filepath, samplerate, channels):
super(FilePathOutput, self).__init__()
self.cmd_suffix = ' '.join(['-r ' + str(samplerate), '-c ' + str(channels), filepath])
class FileBufferOutput(SoxOutput):
def __init__(self, fp, samplerate, channels):
super(FileBufferOutput, self).__init__()
self.writer = wave.open(fp, mode='wb')
self.writer.setnchannels(channels)
self.writer.setframerate(samplerate)
self.writer.setsampwidth(2)
self.cmd_suffix = ' '.join([
'-t ' + ENCODINGS_MAPPING[np.int16],
'-r ' + str(samplerate),
'-c ' + str(channels),
PIPE_CHAR,
])
def write(self, data):
self.writer.writeframesraw(data)
class NumpyArrayOutput(SoxOutput):
def __init__(self, encoding, samplerate, channels):
super(NumpyArrayOutput, self).__init__()
self.cmd_suffix = ' '.join([
'-t ' + ENCODINGS_MAPPING[encoding],
'-r ' + str(samplerate),
'-c ' + str(channels),
PIPE_CHAR,
])
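The input and output classes above each contribute one fragment of a single sox command line: an input spec (`cmd_prefix`), an effects chain, and an output spec (`cmd_suffix`). A minimal self-contained sketch of that assembly, with hypothetical stand-ins for the module-level `ENCODINGS_MAPPING` and `PIPE_CHAR` constants (keyed by dtype name here for brevity; the real mapping uses numpy types):

```python
# Hypothetical stand-ins for pysndfx's module-level constants, shown here
# so the sketch is self-contained; the real values live elsewhere in the file.
ENCODINGS_MAPPING = {'int16': 's16', 'float32': 'f32', 'float64': 'f64'}
PIPE_CHAR = '-'  # '-' tells sox to read from stdin / write to stdout

def build_sox_command(cmd_prefix, effects, cmd_suffix):
    """Assemble a full sox invocation from an input spec, an effects
    chain, and an output spec, mirroring how the classes above are combined."""
    return ' '.join(['sox', cmd_prefix, cmd_suffix, effects])

rate = 44100
prefix = ' '.join(['-t ' + ENCODINGS_MAPPING['int16'], '-r ' + str(rate), '-c 1', PIPE_CHAR])
suffix = ' '.join(['-t ' + ENCODINGS_MAPPING['int16'], '-r ' + str(rate), '-c 1', PIPE_CHAR])
cmd = build_sox_command(prefix, 'reverb 50', suffix)
print(cmd)  # → sox -t s16 -r 44100 -c 1 - -t s16 -r 44100 -c 1 - reverb 50
```

The resulting string is what gets split and handed to `Popen`, with raw samples piped through stdin and stdout.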

@ -1,27 +0,0 @@
# Import the package and create an audio effects chain function.
from pydub import AudioSegment
from pysndfx import AudioEffectsChain
import sys
mp3_audio = AudioSegment.from_file(sys.argv[1], format="mp3")
mp3_audio.export("../uploads/5/page_echo.wav", format="wav")
infile = '../uploads/5/page_echo.wav'
outfile = '../uploads/5/out.wav'
# Apply phaser and reverb directly to an audio file.
#fx(infile, outfile)
AudioEffectsChain().reverb()(infile, outfile)
# Or, apply the effects directly to a ndarray.
#from librosa import load
#y, sr = load(infile, sr=None)
#y = fx(y)
# Apply the effects and return the results as a ndarray.
#y = fx(infile)
# Apply the effects to a ndarray but store the resulting audio to disk.
#fx(x, outfile)
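The echo/reverb effect the script above delegates to pysndfx can be illustrated without sox at all: an echo is just a delayed, attenuated copy of the signal mixed back into itself. A minimal pure-Python sketch (function name and parameters are illustrative, not part of pysndfx):

```python
def add_echo(samples, rate, delay_s=0.25, decay=0.5):
    """Return samples with a delayed, attenuated copy mixed back in.

    delay_s is the echo delay in seconds; decay scales the echo's amplitude.
    """
    delay = int(delay_s * rate)
    # Extend the output so the tail of the echo fits.
    out = list(samples) + [0.0] * delay
    for i, s in enumerate(samples):
        out[i + delay] += decay * s
    return out

# An impulse at t=0 produces a second, quieter impulse delay_s later.
echoed = add_echo([1.0] + [0.0] * 7, rate=8, delay_s=0.5, decay=0.5)
```

Chained sox effects like `AudioEffectsChain().reverb()` apply the same idea with many overlapping delays and filtering.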

@ -0,0 +1,4 @@
import sys

# Fall back to a default value when no argument is supplied.
a = sys.argv[1] if len(sys.argv) > 1 else 'well'
print(a)

@ -1,5 +1,9 @@
<?php
ini_set('display_startup_errors', 1);
ini_set('display_errors', 1);
error_reporting(-1);
ini_set("display_errors", "On");
// requires php5
define('UPLOAD_DIR', '../uploads/5/');
@ -10,6 +14,19 @@ $data = base64_decode($img);
#$file = UPLOAD_DIR . uniqid() . '.mp3';
$file = UPLOAD_DIR . $_POST['name'] . '_'.$_POST['type'] .'.mp3';
$success = file_put_contents($file, $data);
$distortedfile= $_POST['name'] . '_'.$_POST['type'] .'.wav';
// $command = escapeshellcmd('/home/lain/virtualenvs/radioactive/bin/python3 python-audio-effects/echo.py '.$file);
// $output = shell_exec($command);
// echo $output;
// if type=echo "string";
shell_exec('ffmpeg -i python-audio-effects/try_echo.mp3 python-audio-effects/try_echo.wav');
$command = '/home/lain/virtualenvs/radioactive/bin/python3 python-audio-effects/echo.py python-audio-effects/temp.wav';
$output=shell_exec($command);
print $output;
# make a database of recordings
$item=array('name'=>$_POST['name'],'type'=>$_POST['type'],'date'=>date("d/m/Y"), 'file'=>$file);
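The PHP handler above shells out twice: once to ffmpeg to convert the uploaded mp3 to wav, and once to the Python effect script. A hypothetical Python mirror of that flow, shown because building argument lists (as `subprocess.run()` takes them) sidesteps the quoting pitfalls that `escapeshellcmd`/`shell_exec` were being used to work around; the interpreter path and script name are taken from the PHP above, the file paths are illustrative:

```python
# Hypothetical mirror of the PHP handler's two shell calls.
def build_pipeline(mp3_path, wav_path,
                   python_bin='/home/lain/virtualenvs/radioactive/bin/python3',
                   script='python-audio-effects/echo.py'):
    """Return the two commands as argument lists: mp3 -> wav via ffmpeg,
    then the effect script run on the wav. Suitable for subprocess.run()."""
    return [
        ['ffmpeg', '-y', '-i', mp3_path, wav_path],
        [python_bin, script, wav_path],
    ]

cmds = build_pipeline('../uploads/5/voice.mp3', '../uploads/5/voice.wav')
```

Each list can be executed with `subprocess.run(cmd, check=True)` without any shell interpolation of the user-supplied filename.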

@ -1,14 +1,3 @@
/*@media only screen and (min-width: 900px) {*/
body {
background: #F6F5F5;
/*background: #fcf4f6;*/
font-family: "Old Standard TT";
font-size: 95%;
line-height: 1.3;
letter-spacing: 1px;
padding: 20px;
/*transform: scale(1.0);*/
}
section {
display: block;
@ -66,22 +55,6 @@ a img {
font-weight: bold;
}
h1 {
font-size: 150%;
font-style: italic;
text-align: center;
letter-spacing: 5px;
color:black;
animation: color-change 1s infinite;
animation-name: example;
animation-duration: 4s;
animation-play-state: running;
}
@keyframes example {
from {color: #FF00FF;}
to {color:purple;}
}
h2 {
@ -154,12 +127,6 @@ h3 {
text-align: center;
}
table, th, td {
vertical-align: top;
text-align: left;
border-collapse: separate;
padding:20px;
}
button {
width: 100px;
@ -201,7 +168,7 @@ button {
}*/
.audio-mini {
width: 100%;
}
.dropbtn {
@ -367,31 +334,10 @@ margin-bottom:4rem;
min-width:5500% !important;
}
.tooltip-wrap .tooltip-content-right {
display: none;
position: absolute;
z-index: 1;
top: 100%;
/*bottom: 100%;*/
left: 1.5rem;
/*right: 100%;*/
padding: 0.6em;
background-color: white;
border: none;
text-align: left;
min-width: 20em;
border: 1px solid black;
font-size: 13pt !important;
font-weight: normal !important;
font-style: normal !important;
text-align: left !important;
letter-spacing: 0px !important;
word-wrap: break-word;
box-shadow: 5px 5px rgba(255,0,255,0.5);
}
.fa-file {
width:80rem !important;
}
.tooltip-wrap .tooltip-content-up {
@ -535,7 +481,7 @@ width:100rem !important;
.topleft {
float: right;
cursor: pointer;
font-size: 2rem;
}
.container .rowcircle {
@ -552,20 +498,6 @@ width:100rem !important;
width: 100% !important;
}
.draggable {
box-shadow: 5px 5px 10px rgba(0, 0, 0, 0.2);
background-color: white;
cursor: all-scroll;
position: absolute !important;
width: 15%;
display: inline;
/* transform: scale(20);*/
/*min-height: 100px;*/
background-image: url("../images/resize-icon.png");
background-position: 100% 100%;
background-size: 30px 30px;
background-repeat: no-repeat;
}
.draggable-circle {
box-shadow: 5px 5px 10px rgba(0, 0, 0, 0.2);
@ -740,7 +672,6 @@ li, #angela, #judith, #laurie, #dana, #katalin {
width: 20%;
/*height: 100px;*/
}
.ciclegraph .circle:hover {
@ -750,21 +681,224 @@ li, #angela, #judith, #laurie, #dana, #katalin {
/*}*/
@media screen and (min-width: 300px){
body {
background: #F6F5F5;
/*background: #fcf4f6;*/
font-family: "Old Standard TT";
font-size: 1em;
line-height: 1.3;
letter-spacing: 1px;
padding: 1em;
}
h1 {
font-size: 2em;
font-style: italic;
text-align: center;
letter-spacing: 5px;
color:black;
animation: color-change 1s infinite;
animation-name: example;
animation-duration: 4s;
animation-play-state: running;
}
@keyframes example {
/* from {color: #FF00FF;}
to {color:purple;}*/
0% {color: #FF00FF;}
25% {color: #ff00bf;}
50% {color: #ffd6e0;}
75% {color: #fedf2e;}
100% {color:#e600e6;}
}
.draggable {
box-shadow: 5px 5px rgba(255,0,255,0.5);
background-color: white;
cursor: all-scroll;
position: absolute !important;
display: inline;
/* transform: scale(20);*/
/*min-height: 100px;*/
border: none;
text-align: left;
max-width: 60%;
border: 1px solid black;
padding: 0.8em;
z-index: 1;
font-size: 1.2em;
}
.tooltip-wrap .tooltip-content-right {
display: none;
position: absolute;
z-index: 1;
/*top: 100%;*/
/*bottom: 100%;*/
left: 1.5rem;
/*right: 100%;*/
top: 20%;
padding: 0.6em;
background-color: white;
border: none;
text-align: left;
min-width: 20em;
border: 1px solid black;
font-weight: normal !important;
font-style: normal !important;
text-align: left !important;
letter-spacing: 0px !important;
word-wrap: break-word;
box-shadow: 5px 5px rgba(255,0,255,0.5);
}
/*radioactive */
.radioactive td {
overflow: auto;
max-width: 2%;
height: 200px;
border: 1px solid black;
padding: 2%;
vertical-align: top;
}
table, th {
vertical-align: top;
text-align: left;
border-collapse: separate;
padding:20px;
}
/*.radioactive {
width: 100% !important;
}*/
.radioactive audio{
width: 40%;
}
.radioactive button, .recorder input {
background-color: #F6F5F5;
border: 1px solid black;
padding: 20px 20px;
text-align: center;
text-decoration: none;
display: inline-block;
margin: 2px 2px;
cursor: pointer;
width:300px;
font-size: 1em;
border-radius: 25px;
}
.popup {
background-color: #fedf2e;
}
}
@media screen and (min-width: 1000px){
body {
background: #F6F5F5;
/*background: #fcf4f6;*/
font-family: "Old Standard TT";
font-size: 1.4em;
line-height: 1.3;
letter-spacing: 1px;
padding: 20px;
}
h1 {
font-size: 2em;
font-style: italic;
text-align: center;
letter-spacing: 5px;
color:black;
animation: color-change 1s infinite;
animation-name: example;
animation-duration: 4s;
animation-play-state: running;
}
@keyframes example {
/* from {color: #FF00FF;}
to {color:purple;}*/
0% {color: #FF00FF;}
25% {color: #ff00bf;}
50% {color: #ffd6e0;}
75% {color: #fedf2e;}
100% {color:#e600e6;}
}
.draggable {
box-shadow: 5px 5px rgba(255,0,255,0.5);
background-color: white;
cursor: all-scroll;
position: absolute !important;
display: inline;
/* transform: scale(20);*/
/*min-height: 100px;*/
border: none;
text-align: left;
max-width: 50%;
border: 1px solid black;
padding: 0.8em;
z-index: 1;
font-size: 1.2em;
}
.tooltip-wrap .tooltip-content-right {
display: none;
position: absolute;
z-index: 1;
/*top: 100%;*/
/*bottom: 100%;*/
left: 1.5rem;
/*right: 100%;*/
top: 20%;
padding: 0.6em;
background-color: white;
border: none;
text-align: left;
min-width: 20em;
border: 1px solid black;
font-weight: normal !important;
font-style: normal !important;
text-align: left !important;
letter-spacing: 0px !important;
word-wrap: break-word;
box-shadow: 5px 5px rgba(255,0,255,0.5);
}
/*radioactive */
.radioactive td {
overflow: auto;
max-width: 20%;
height: 200px;
border: 1px solid black;
padding: 2%;
vertical-align: top;
}
table, th {
vertical-align: top;
text-align: left;
border-collapse: separate;
padding:20px;
}
/*.radioactive {
width: 100% !important;
}*/
.radioactive audio{
width: 40%;
}
.radioactive button, .recorder input {
@ -777,6 +911,13 @@ li, #angela, #judith, #laurie, #dana, #katalin {
margin: 2px 2px;
cursor: pointer;
width:300px;
font-size: 1em;
border-radius: 25px;
}
.popup {
background-color: #fedf2e;
}
}

@ -0,0 +1,12 @@
<!doctype html>
<html>
<head>
<title>demo web file python3.2</title>
</head>
<body>
<form name="pyform" method="POST" action="/scripts/cgi-bin/echo.cgi">
<input type="text" name="fname" />
<input type="submit" name="submit" value="Submit" />
</form>
</body>
</html>
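The form above POSTs `fname` as `application/x-www-form-urlencoded` to `/scripts/cgi-bin/echo.cgi`. A minimal sketch of the parsing that such a CGI script would do (the helper name is hypothetical; a real CGI script would also read `CONTENT_LENGTH` bytes from stdin and print a `Content-Type` header first):

```python
from urllib.parse import parse_qs

def echo_fname(body):
    """Parse a urlencoded POST body, as the form above sends it,
    and return the submitted fname field (empty string if absent)."""
    fields = parse_qs(body)
    return fields.get('fname', [''])[0]

print(echo_fname('fname=hello&submit=Submit'))  # → hello
```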

@ -0,0 +1,4 @@
This web-audio interface was made by <a href="https://w-i-t-m.net" target="_">Angeliki Diakrousi</a>. It challenges the ways we listen to certain female-sounding voices that are perceived as inappropriate - because of the quality of their sound, their gender, and the medium's distortions, but also because of the stereotypes and collective memories that they often awaken. These are verbal expressions that have been associated with forms of monstrosity since ancient times. Contributors are invited to record themselves with their own microphones, express their thoughts, and choose a type of distortion. They are invited to participate in forming new imaginaries around a technologically mediated collective voice that, through its 'monstrosity', can reveal other forms of speech that embrace their damage. They can choose what type of mediated voice they want to do that with. A series of writings reveals stories and theories on the topic. The interface allows any voice to be recorded and saved on the artist's server, where the website is hosted. There the recordings are distorted through several scripts. The new sounds are then categorized, depending on the chosen distortion, and published back on the platform. They can be played at the same time or in any desired order. The interface was first made for the performance "Radioactive Monstrosities" as part of the event <a href="https://www.facebook.com/events/524864625047957/">NL_CL #2 : FLESH</a> hosted by <a href="https://instrumentinventors.org/">iii</a> and <a href="https://netherlands-coding-live.github.io/">Netherlands Coding Live</a>, and its conceptualization is an ongoing process related to my graduation research <a href="eaiaiaiaoi.w-i-t-m.net">Let's Amplify Unspeakable Things</a> and previous <a href="http://w-i-t-m.net/2019/radio-active-female-monstrosity_2019.html">performances</a><p>
<i>Credits</i>: the platform is inspired by Vocable Code <a href="http://siusoon.net/vocable-code/">http://siusoon.net/vocable-code/</a>. Regarding the references, I would like to thank Alice, Gert, and Joana for sharing material with me, as well as Amy for her publication made for the workshop <a href="http://w-i-t-m.net/2020/ecstatic-speech-2020.html">Eclectic Speech</a>. Thanks to the people who contributed and donated their voices.<p>

@ -1,19 +1,5 @@
This web-audio interface was made by <a href="https://w-i-t-m.net" target="_">Angeliki Diakrousi</a>. It challenges the ways we listen to certain female-sounding voices that are perceived as inappropriate - because of the quality of their sound, their gender, and the medium's distortions, but also because of the stereotypes and collective memories that they often awaken. These are verbal expressions that have been associated with forms of monstrosity since ancient times. Contributors are invited to record themselves with their own microphones, express their thoughts, and choose a type of distortion. They are invited to participate in forming new imaginaries around a technologically mediated collective voice that, through its 'monstrosity', can reveal other forms of speech that embrace their damage. They can choose what type of mediated voice they want to do that with. A series of writings reveals stories and theories on the topic. The interface allows any voice to be recorded and saved on the artist's server, where the website is hosted. There the recordings are distorted through several scripts. The new sounds are then categorized, depending on the chosen distortion, and published back on the platform. They can be played at the same time or in any desired order. The interface was first made for the performance "Radioactive Monstrosities" as part of the event <a href="https://www.facebook.com/events/524864625047957/">NL_CL #2 : FLESH</a> hosted by <a href="https://instrumentinventors.org/">iii</a> and <a href="https://netherlands-coding-live.github.io/">Netherlands Coding Live</a>, and its conceptualization is an ongoing process related to my graduation research <a href="eaiaiaiaoi.w-i-t-m.net">Let's Amplify Unspeakable Things</a> and previous <a href="http://w-i-t-m.net/2019/radio-active-female-monstrosity_2019.html">performances</a><p>
<i>Credits</i>: the platform is inspired by Vocable Code <a href="http://siusoon.net/vocable-code/">http://siusoon.net/vocable-code/</a>. Regarding the references, I would like to thank Alice, Gert, and Joana for sharing material with me, as well as Amy for her publication made for the workshop <a href="http://w-i-t-m.net/2020/ecstatic-speech-2020.html">Eclectic Speech</a>. Thanks to the people who contributed and donated their voices.<p>
<div style="color:#A19696"><i>Some more thoughts:</i> Women's voices are often going through censorship and critique when they appear in public. They have to be adjusted and filtered in order to be heard and avoid silencing. This is extended to the technological apparatus that channel their voices, like radio. The listeners then, are hearing a distorted voice that expresses needs and opinions of its physical proprietor and it goes beyond its control. In this work I focus on the sound of that voice that is asking to become part of public dialogues. In the platform I refer to several examples that such voices have been through some kind of distortion. But it is not only women's voices that are going through these filters. It is also the queer, any feminine sounding voice, the collective, the resistant, the black and so many more. The examples I refer to have attracted my attention and are triggering aspects of my personal identity. This platform intends to welcome any case of medium's transformations that certain voices are going through as an extension of the society's censorship. I want to open the dialogue to more communities and people that have noticed an exclusion through mediated speech platforms. I invite you to reclaim these distortions that characterize excluded voices. Their distortion becomes their quality that is also their damage, through which needs, opinions, desires are expressed. I made a set of tools that distort the voice in ways that are borrowed from the examples I have encountered in my research. I invite you to choose the voice that you think reflects your personal needs, damages or you want to talk through it and become part of this dialogue and share your own experience or your imaginations about this voice. Sometimes this mediated voice is safer as it camouflages the actual vocal identity, and become anonymous.</div><p>
n: Women apologize or prove that what they say make sense [ref to text of zapatista].
Women's voices are often going through censorship and critique when they appear in public. They have to be adjusted and filtered in order to be heard and avoid silencing. This is extended to the technological apparatus/the medium used to channel their voices, like the radio. The listeners then, are hearing a distorted voice that expresses needs and opinions of its physical proprietor and it goes beyond its control. In this work I focus on the sound of that voice that is asking to become part of public dialogues. In the platform I refer to several examples that such voices have been through some distortion. But it is not only women's voices that are going through these filters. It is also the queer, any feminine sounding voice, the collective, the resistant, the black. The examples I refer to have attracted my attention and are situated in my personal identity. This platform intends to welcome any case of medium's transformations that certain voices are going through as an extension of the society's censorship. I want to open the dialogue to more communities and people that have noticed an exclusion through mediated speech platforms.
I invite you to reclaim these distortions that characterize excluded voices. Their distortion becomes their quality that is also their damage, through which needs and rights are expressed [text of queer damaging]. I made a set of tools that distort the voice in ways borrowed from the examples I have encountered in my research. I invite you to choose the voice that you think reflects your personal needs, damages or you want to talk through it and become part of this dialogue and share your own experience or your imaginations about this voice. Sometimes this mediated voice is safer as it camouflages the actual identity of our voice, and become anonymous, like the example of witness voices. Categorize the distortions
The interface was first made for the performance "Radioactive Monstrosities" as part of the event <a href="https://www.facebook.com/events/524864625047957/"> NL_CL #2 : FLESH</a> hosted by <a href="https://instrumentinventors.org/">iii</a> and <a href="https://netherlands-coding-live.github.io/">Netherlands Coding Live</a> and its conceptualization is an ongoing process related to my graduation research <a href="eaiaiaiaoi.w-i-t-m.net"> Let's Amplify Unspeakable Things</a> and previous <a href="http://w-i-t-m.net/2019/radio-active-female-monstrosity_2019.html">performances </a>
Credits: the platform is inspired by Vocable Code <a href="http://siusoon.net/vocable-code/">http://siusoon.net/vocable-code/</a>. Regarding the references thanks Alice, Gert, Joana for sharing with me, as well Amy for her publication made for the workshop <a href="http://w-i-t-m.net/2020/ecstatic-speech-2020.html">Eclectic Speech</a>.
My fascination started through these perfoamnces that I want to open through them with a possibility in the future to become a more open dialogue.
(Technology is seamlessly related to socity and its dynamics amd biases)
I will narrate/describe what it is about and I ask you to contribute by adding your comment
Women have to transform their voices or train more in order to be heard. They are 'monsters' anyway

@ -1,9 +1,7 @@
Instructions:<p>
*Press record and allow microphone share<br>
*Speak to the microphone for a couple of seconds (max size: 1M). The texts may be a source of inspiration<br>
*Press stop and listen to it<br>
*Press upload<br>
*Type your name or nickname<br>
*Choose a type of distortion and license<br>

@ -1,4 +1,30 @@
We often feel uncanny listening back to our voices and their echoes through phones, video calls, and voice messages. Imagine what a radical change radio brought to the experience of listening to each other when it became public and accessible to everyone. I would like to think of echo in a more metaphorical sense, regarding mediated female voices. I imagine a voice with a special quality that creates the feeling that it exists in different temporal spaces. I am thinking of gossip as an echo effect, where messages are spread fast and collectively through a sequence of voices. Here is an extract from a book by Silvia Federici:<p>
<div style="color:#A19696">
[1]**Gossiping and the Formation of a Female Viewpoint<p>
Gossip today designates informal talk, often damaging to those that are its object. It is mostly talk that draws its satisfaction from an irresponsible disparaging of others; it is circulation of information not intended for the public ear but capable of ruining people's reputations, and it is unequivocally women's talk.
It is women who gossip, presumably having nothing better to do and having less access to real knowledge and information and a structural inability to construct factually based, rational discourses. Thus, gossip is an integral part of the devaluation of women's personality and work, especially domestic work, reputedly the ideal terrain on which this practice flourishes.
This conception of gossip, as we have seen, emerged in a particular historical context. Viewed from the perspective of other cultural traditions, this idle women's talk would actually appear quite different. In many parts of the world, women have historically been seen as the weavers of memory—those who keep alive the voices of the past and the histories of the communities, who transmit them to the future generations and, in so doing, create a collective identity and profound sense of cohesion. They are also those who hand down acquired knowledges and wisdoms—concerning medical remedies, the problems of the heart, and the understanding of human behavior, starting with that of men. Labeling all this production of knowledge gossip is part of the degradation of women—it is a continuation of the demonologists' construction of the stereotypical woman as prone to malignity, envious of other people's wealth and power, and ready to lend an ear to the Devil. It is in this way that women have been silenced and to this day excluded from many places where decisions are taken, deprived of the possibility of defining their own experience, and forced to cope with men's misogynous or idealized portraits of them. But we are regaining our knowledge. As a woman recently put it in a meeting on the meaning of witchcraft, the magic is: “We know that we know.”**<p></div>
Federici on gossip and “The Transformation of Silence into Language and Action” by Audre Lorde. In feminist movements the practice of speech and listening is very present. For many women it is difficult to express their inner thoughts, fears, and opinions. These thoughts become internal voices that accumulate into anger and despair. They become endless echoes of unrealized public speeches reverberating inside our bodies. In the text below, Audre Lorde speaks with warmth and strength about turning this silence into language and action.<p>
<div style="color:#A19696">
**The Transformation of Silence into Language and Action[2]<p>
(...)In the cause of silence, each of us draws the face of her own fear — fear of contempt, of censure, or some judgment, or recognition, of challenge, of annihilation. But most of all, I think, we fear the visibility without which we cannot truly live. Within this country where racial difference creates a constant, if unspoken, distortion of vision, Black women have on one hand always been highly visible, and so, on the other hand, have been rendered invisible through the depersonalization of racism(...)<p>
Each of us is here now because in one way or another we share a commitment to language and to the power of language, and to the reclaiming of that language which has been made to work against us. In the transformation of silence into language and action, it is vitally necessary for each one of us to establish or examine her function in that transformation and to recognize her role as vital within that transformation.
For those of us who write, it is necessary to scrutinize not only the truth of what we speak, but the truth of that language by which we speak it. For others, it is to share and spread also those words that are meaningful to us. But primarily for us all, it is necessary to teach by living and speaking those truths which we believe and know beyond understanding. Because in this way alone we can survive, by taking part in a process of life that is creative and continuing, that is growth.
And it is never without fear — of visibility, of the harsh light of scrutiny and perhaps judgment, of pain, of death. But we have lived through all of those already, in silence, except death. And I remind myself all the time now that if I were to have been born mute, or had maintained an oath of silence my whole life long for safety, I would still have suffered, and I would still die. It is very good for establishing perspective.
And where the words of women are crying to be heard, we must each of us recognize our responsibility to seek those words out, to read them and share them and examine them in their pertinence to our lives. That we not hide behind the mockeries of separations that have been imposed upon us and which so often we accept as our own. For instance, “I can't possibly teach Black women's writing — their experience is so different from mine.” Yet how many years have you spent teaching Plato and Shakespeare and Proust? Or another, “She's a white woman and what could she possibly have to say to me?” Or, “She's a lesbian, what would my husband say, or my chairman?” Or again, “This woman writes of her sons and I have no children.” And all the other endless ways in which we rob ourselves of ourselves and each other.
We can learn to work and speak when we are afraid in the same way we have learned to work and speak when we are tired. For we have been socialized to respect fear more than our own needs for language and definition, and while we wait in silence for that final luxury of fearlessness, the weight of that silence will choke us.
The fact that we are here and that I speak these words is an attempt to break that silence and bridge some of those differences between us, for it is not difference which immobilizes us, but silence. And there are so many silences to be broken.**<p></div>
<i>[1] The extracts in this text were collected by Amy Pickles in the publication for the workshop "Eclectic Speech", where I was invited to do a session called "Speaking to the machine".</i><p>
<i>[2]Paper delivered at the Modern Language Association's “Lesbian and Literature Panel,” Chicago, Illinois, December 28, 1977. First published in Sinister Wisdom 6 (1978) and The Cancer Journals (Spinsters, Ink, San Francisco, 1980).</i>
<ul>
<li>Silvia Federici (2018) Witches, Witch-Hunting, and Women. Oakland, CA: PM Press.
</li>
<li>Lorde, A. and Clarke, C. (2007) Sister Outsider: Essays and Speeches. Reprint edition. Berkeley, Calif: Crossing Press.
</li>
</ul>

@ -1,9 +1,11 @@
**[1]TINA TALLON:. . . Newspapers and magazines repeatedly referred to women on air as “affected,” “stiff,” “forced,” and “unnatural.” . . . they asserted that women sounded “shrill,” “nasal,” and “distorted” on the radio, and claimed that women's higher voices created technical problems.**<p>
As Tina Tallon observes, in 1927 the Federal Radio Commission decided to provide each station its own little 10000 hertz slice of bandwidth. So there's a segment . . . before you take the signal and modulate it into something that can be transferred . . . `{./radio-voice.sh}` in between stations known as the baseband or the pre-modulated signal that had to be actually limited to 5000 hertz because amplitude modulation actually doubles the bandwidth of the signal. So initially they said, OK, we're gonna take all of our baseband signals and limit them to 5000 hertz. What that meant was all of the microphones and all of the equipment that people were using to record didn't need to go above 5000 hertz because none of that information would get transmitted.<p>
**TINA TALLON: `{./lowpass.sh}`Steinberg's experiments showed that the voiceband frequencies reduced the intelligibility of female speech by cutting out the higher-frequency components necessary for the perception of certain consonants. Steinberg asserted that “nature has so designed woman's speech that it is always most effective when it is of soft and well modulated tone.” Hinting at the age-old notion that women are too emotional, he wrote that a woman's raised voice would exceed the limitations of the equipment, thus reducing her clarity on air**. <p>
<i>[1]Extract taken from scripts that are part of the performance "Radioactive Monstrosities"</i>
<ul> <ul>
<li>Tallon, T. (2019) A Century of “Shrill”: How Bias in Technology Has Hurt Women's Voices, The New Yorker. Available at: https://www.newyorker.com/culture/cultural-comment/a-century-of-shrill-how-bias-in-technology-has-hurt-womens-voices
</li>
