Personal Primer (digitale Fibel, fantastische Fibel, etc.) is a digital artefact aiming to enrich the narrative, mathematical and musical intelligence of a primary school (Grundschule) pupil. It instantiates 23 attributes divided into...
Make-Your-Own-Device (M.Y.O.D.) and upcycling approaches will be combined to attain our common goal.
Keywords: digital artefacts, Raspberry Pi Zero, upcycling, make-your-own-device, creativity, touchless man-machine interaction, zone of proximal development, electronic ink, algorithmic drum circle
In this course, we are going to follow a nice O'Reilly data science manual and, line by line, learn the meaning of terms like "feature", "multi-class classification", "training" and "cross validation" and, while doing so, acquire all the necessary prerequisites for "the sexiest job of the 22nd century".
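To give a concrete flavour of this vocabulary, here is a minimal Python sketch assuming the scikit-learn library and its bundled iris dataset (an assumption: the actual manual, library and data used in the course may differ). Each column of X is a "feature", the three species labels make this a "multi-class classification" problem, fitting the model is "training", and evaluating it on held-out folds is "cross validation".

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Each row of X is a sample, each column a "feature";
# y contains three distinct class labels, i.e. a "multi-class classification" problem.
X, y = load_iris(return_X_y=True)

# "Training" means fitting the classifier's parameters to the data.
clf = LogisticRegression(max_iter=1000)

# "Cross validation" means repeatedly training on part of the data
# and evaluating on the held-out remainder (here: 5 folds).
scores = cross_val_score(clf, X, y, cv=5)
print("mean accuracy across 5 folds:", scores.mean())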
We start this Friday (24th April) at 10:00 am
In the middle of a battle there is a company of Italian soldiers in the trenches, and a commander who issues the command “Soldiers, attack!” He cries out in a loud and clear voice to make himself heard in the midst of the tumult, but nothing happens, nobody moves. So the commander gets angry and shouts louder: “Soldiers, attack!” Still nobody moves. And since in jokes things have to happen three times for something to stir, he yells even louder: “Soldiers, attack!” At which point there is a response, a tiny voice rising from the trenches, saying appreciatively “Che bella voce!” “What a beautiful voice!” - excerpt from the book 'A Voice and Nothing More' by Mladen Dolar
In this course, we are going to make our own digital artefacts, within the scope of your personal interests in Voice and Speech.
Throughout the seminar, we will build our own personal speech recognition system based on machine learning, one which can understand (more specifically, "transcribe") human speech as a medium for our artistic practice. In the first half of the seminar, students will be introduced to the domain of Automatic Speech Recognition (ASR) technology, as well as to the diverse ways in which speech-to-text (STT) inference can be realized on non-cloud, local (i.e. edge-computing) architectures. This means that Python programming and basic Unix command-line skills will be involved.
Once we have developed our own system, the second half of the seminar will be devoted to making artefacts (a media installation, educational device, musical instrument, performative material, sound works, etc., of your choice) in which human and machine communicate through human voice and speech.
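To illustrate what local, non-cloud STT inference can look like in practice, here is a minimal Python sketch assuming the open-source Vosk toolkit, a downloaded model directory named "model" and a 16 kHz mono WAV recording named "speech.wav"; these names are placeholders, and the actual toolkit and setup used in the seminar may differ.

import json
import wave
from vosk import Model, KaldiRecognizer

# Load an offline model and open a local recording;
# nothing is sent to the cloud, everything runs on the local machine.
model = Model("model")               # path to a downloaded Vosk model directory
wav = wave.open("speech.wav", "rb")  # 16 kHz, 16-bit, mono recording

rec = KaldiRecognizer(model, wav.getframerate())

# Feed the audio to the recognizer chunk by chunk.
while True:
    data = wav.readframes(4000)
    if len(data) == 0:
        break
    rec.AcceptWaveform(data)

# The final result is a JSON string whose "text" field holds the transcription.
result = json.loads(rec.FinalResult())
print(result["text"])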
*Please register for the seminar by e-mail before the semester starts.
*Seminar starts on 27.10
*A Raspberry Pi 4B and a ReSpeaker 2-Mics Pi HAT will be given to each participant for the duration of the semester.
*The seminar takes place from time to time at the Berlin Open Lab (Einsteinufer 43).
Coded in vim (front-end: D3.js; back-end: kastalia.medienhaus) by Prof. Daniel D. Hromada (UdK / ECDF).