Date | Topic |
---|---|
11.4 | Introduction |
18.4 | Art & Artefacts |
25.4 | Tools & instruments |
2.5 | Material |
9.5 | Modules and components |
16.5 | Making the Itty Bitty Beat Box |
23.5 | ECDF visit - Wilhelmstrasse 67 |
30.5 | NO COURSE (Christi Himmelfahrt / Ascension Day) |
6.6 | Format |
13.6 | Shell |
20.6 | Berlin Open Lab - Einsteinufer, UdK |
27.6 | Optimizing & testing |
4.7 | Goal |
Semester | Topic |
---|---|
WiSe 2018/2019 | Bootstrapping & exploring |
SoSe 2019 | Playing, specifying, defining |
WiSe 2019/2020 | E-paper |
SoSe 2020 | Machine learning, speech technologies, handwriting recognition |
WiSe 2020/2021 | Testing & optimizing |
SoSe 2021 | Deploying |
WiSe 2021/2022 |
??? |
https://mutantc.gitlab.io/index.html
Over the years we’ve seen the Raspberry Pi crammed into almost any piece of hardware you can think of. Frankly, seeing what kind of unusual consumer gadget you can shoehorn a Pi into has become something of a meme in our circles. But the thing we see considerably less of are custom designed practical enclosures which actually play to the Pi’s strengths. Which is a shame, because as the MutantC created by [rahmanshaber] shows, there’s some incredible untapped potential there.
The MutantC features a QWERTY keyboard and sliding display, and seems more than a little inspired by early smartphone designs. You know, how they were before Apple came in and managed to convince every other manufacturer that there was no future for mobile devices with hardware keyboards. Unfortunately, hacking sessions will need to remain tethered as there’s currently no battery in the device. Though this is something [rahmanshaber] says he’s actively working on.
The custom PCB in the MutantC will work with either the Pi Zero or the full size variant, but [rahmanshaber] warns that the latest and greatest Pi 4 isn’t supported due to concerns about overheating. Beyond the Pi the parts list is pretty short, and mainly boils down to the 3D printed enclosure and the components required for the QWERTY board: 43 tactile switches and a SparkFun Pro Micro. Everything is open source, so you can have your own boards run off, print your case, and you’ll be well on the way to reliving those two-way pager glory days.
We’re excited to see where such a well documented open source project like MutantC goes from here. While the lack of an internal battery might be a show stopper for some applications, we think the overall form factor here is fantastic. Combined with the knowledge [Brian Benchoff] collected in his quest to perfect the small-scale keyboard, you’d have something very close to the mythical mobile Linux device that hackers have been dreaming of.
Keyboards:
https://hackaday.io/project/158454-mini-piqwerty-usb-keyboard
https://hackaday.com/2019/04/23/reaction-video-build-your-own-custom-fortnite-controller-for-a-raspberry-pi/
In the middle of a battle there is a company of Italian soldiers in the trenches, and a commander who issues the command “Soldiers, attack!” He cries out in a loud and clear voice to make himself heard in the midst of the tumult, but nothing happens, nobody moves. So the commander gets angry and shouts louder: “Soldiers, attack!” Still nobody moves. And since in jokes things have to happen three times for something to stir, he yells even louder: “Soldiers, attack!” At which point there is a response, a tiny voice rising from the trenches, saying appreciatively “Che bella voce!” “What a beautiful voice!” - excerpt from the book 'A Voice and Nothing More' by Mladen Dolar
In this course, you are going to make your own digital artefacts within the scope of your personal interests in Voice and Speech.
Throughout the seminar, we will build our own personal speech recognition system, based on machine learning, that can understand (more specifically, "transcribe") human speech as a medium for our artistic practice. In the first half of the seminar, students will be introduced to the domain of Automatic Speech Recognition (ASR) technology and to the diverse ways in which speech-to-text (STT) inference can be realized on non-cloud, local (i.e. edge-computing) architectures. This means that Python programming and basic Unix command-line skills will be involved.
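To give a first impression of what local, non-cloud STT inference can look like in Python, here is a minimal sketch. It assumes the Vosk library and an already downloaded Vosk model; the toolkit, model path, and audio file name are illustrative assumptions, not the seminar's prescribed setup.

```python
# Minimal offline speech-to-text sketch.
# Assumptions: the Vosk library is installed (pip install vosk) and a Vosk
# model has been unpacked into the directory "model"; "recording.wav" is a
# 16 kHz, 16-bit, mono WAV file. All of these names are illustrative.
import json
import wave

from vosk import Model, KaldiRecognizer

MODEL_DIR = "model"           # hypothetical path to an unpacked Vosk model
AUDIO_FILE = "recording.wav"  # hypothetical input recording

model = Model(MODEL_DIR)
recognizer = KaldiRecognizer(model, 16000)

# Feed the audio to the recognizer in chunks, entirely on the local machine.
with wave.open(AUDIO_FILE, "rb") as wf:
    while True:
        data = wf.readframes(4000)
        if len(data) == 0:
            break
        recognizer.AcceptWaveform(data)

# The final result is a JSON string containing the transcription.
result = json.loads(recognizer.FinalResult())
print(result["text"])
```

Running a script like this on the Raspberry Pi itself is the kind of edge-computing workflow the first half of the seminar is concerned with.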
Once we have developed our own system, the second half of the seminar will dive into making artefacts (a media installation, educational device, musical instrument, performative material, sound work, etc., of your choice) in which human and machine communicate through voice and speech.
*Please register for the seminar by e-mail before the semester starts.
*Seminar starts on 27.10
*A Raspberry Pi 4B and a ReSpeaker 2-Mics Pi HAT will be provided to each participant for the duration of the semester (a short recording sketch for this hardware follows below).
*The seminar takes place from time to time at the Berlin Open Lab (Einsteinufer 43).
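As a first hands-on test of the Pi and the ReSpeaker 2-Mics Pi HAT, a short recording script could look like the following. This is a minimal sketch assuming the sounddevice and soundfile Python packages and that the HAT is the default ALSA capture device; the actual device name or index may need to be looked up with sounddevice.query_devices().

```python
# Minimal recording sketch for the Pi + ReSpeaker 2-Mics Pi HAT.
# Assumptions: sounddevice and soundfile are installed, and the HAT is the
# default capture device; the output file name is illustrative.
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 16000            # typical rate for speech recognition
DURATION = 5                   # seconds
OUTPUT_FILE = "recording.wav"  # hypothetical output path

print("Recording...")
audio = sd.rec(int(DURATION * SAMPLE_RATE),
               samplerate=SAMPLE_RATE,
               channels=1,
               dtype="int16")
sd.wait()  # block until the recording is finished

sf.write(OUTPUT_FILE, audio, SAMPLE_RATE)
print(f"Saved {DURATION} s of audio to {OUTPUT_FILE}")
```

The resulting WAV file can then be fed to an offline recognizer such as the one sketched earlier in the course description.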