
unfamiliar virtual convenience

voice assistants and speculative futures

How would giving new voices to our devices change our interactions and relationships with them?

• 27 May - 24 June 2020
• Online!
• Five weeks, Wednesdays, 7-9 PM China Standard Time / 1-3 PM CET
• Small class of participants


Pricing (For tickets see below)
Artist / Student (Full Time)*
€135

Freelancer*
€155

Professional*
€175

Generous Supporter Ticket*
€245

Solidarity ticket*
Donation (Limited)


course
description

The current circumstances of our lives redefine the ways in which we live online. Simultaneously, we have a chance to reflect upon our home ecologies and the connected devices that inhabit them.

In recent years, the adoption of voice-enabled devices has grown significantly. It's predicted that by later this year a third of all web browsing will be done without a screen, and that by 2022 more than half of households will own smart devices with integrated virtual assistants. This begins to sound plausible when we consider that many people who already own and use these devices say it feels natural talking to them, with some even claiming it feels like talking to a friend.

Artificially synthesised voices have taken our interactions with personal devices beyond pushing buttons, twisting and swiping. Nonetheless, while mediating notions of service, access to knowledge, and digital companionship, voice assistants are currently reduced to trivial, task-oriented power plays: order me this, turn on that, play these, entertain me.

The course aims to reconsider voice assistants as subjects rather than objects, and to build more intimate relationships with them. Through speculative and critical design, we will temporarily detach from the limitations of existing technologies in order to speculate on what voice assistants could become. We will then set up our own open-source voice assistants using machine learning and engage in home experiments enabling their training and growth towards our imagined scenarios. Finally, we will connect our devices and their knowledge into a common pool, allowing them, and us, to share with and through each other: a community network of voice assistants with its own notion of proximity and relatedness.

There will be time and space for questions, debates and conversations throughout the course.

Please note that this class takes place 7pm-9pm China Standard Time as one of the instructors is Shanghai-based. To convert for your timezone, feel free to use this timezone converter. Classes will be recorded in case you're unable to attend each session live.


course outline

Week 1: A moment to get to know each other.

This session is about getting to know each other and learning about everyone's expectations. What brought you to this course? What is it that you hope to learn and practice? The second half of the session will introduce voice-assistant technology, present relevant creative projects, and raise critical inquiries.

Week 2: What If

Futures and critical design exercise: What could a voice assistant become? This session will focus on generating possible and plausible briefs for ourselves through a series of creative props and constraints using speculative and critical design methodologies.

Week 3: Hands-On

This session will focus on the basic setup of an open-source voice assistant. We will get to work testing and combining some of our prepared modules. We will be using the open-source Mycroft voice assistant on Linux (or in VirtualBox for those using macOS or Windows), Google Colaboratory, and Google Drive.
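To give a rough sense of what the hands-on setup involves, the basic Mycroft installation on Linux looks something like the commands below. Treat this as an orientation sketch rather than a definitive recipe: exact steps can change between releases, and we will walk through the current official instructions together during the session.

```shell
# Clone the Mycroft core repository (on Linux, or inside a VirtualBox Linux VM)
git clone https://github.com/MycroftAI/mycroft-core.git
cd mycroft-core

# Run the guided setup script; it installs dependencies and
# creates a dedicated Python environment for Mycroft
bash dev_setup.sh

# Start all Mycroft services (message bus, skills, audio, voice)
./start-mycroft.sh all

# Open the command-line client to interact with the assistant by typing
./start-mycroft.sh cli
```

No special hardware is needed to start; a laptop microphone is enough for the first experiments.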

Week 4: Your Voice Assistant Grows Up

In this session, we will set up a machine learning environment for our voice assistants to learn from what we tell them, whether it be poetry, children's tales, or YouTube comments. We will connect our devices into a pool, so we can access and engage with the shared knowledge.
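For a flavour of what "learning from what we tell them" can mean, here is a deliberately tiny, standard-library-only sketch: a word-level Markov chain that generates new text from whatever corpus you feed it. This is a toy stand-in, not the machine-learning environment used in the course, and the function names are our own illustration.

```python
import random
from collections import defaultdict

def build_model(text, order=2):
    """Map each run of `order` words to the words observed following it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, length=12, seed=None):
    """Walk the model from a random starting key, emitting up to `length` words."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    output = list(key)
    while len(output) < length:
        followers = model.get(tuple(output[-len(key):]))
        if not followers:  # dead end: no observed continuation
            break
        output.append(rng.choice(followers))
    return " ".join(output)

if __name__ == "__main__":
    corpus = "order me this turn on that play these entertain me order me that"
    model = build_model(corpus)
    print(generate(model, length=8, seed=42))
```

The point of the exercise is less the algorithm than the feedback loop: what you choose to read to your assistant shapes what it can say back.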

Week 5: Voice Assistant Jam

In this final session, we will present and record our results, talk to our voice assistants, and explain our training methods. We will discuss what worked, what didn't, and what the next steps could look like. The "self graph" of each device will be introduced as a tool to visualise its state of being, or what it has learned so far.

By the end of the course, we hope each participant will have a taste of how to produce collective learning results and how to support the emergence of collective knowledge within their own teams.


who is this
class for?

People interested in voice-assistant technology and its future(s), and those with a general interest in the potential of home devices beyond utilitarianism. Students in creative and technical fields, and beginner-to-intermediate creative coders. Internet of Things enthusiasts and sceptics. People interested in speculative and futures design exercises, and how their outcomes can tangibly feed back into the present.


about live classes

Classes are 'live', meaning that you can directly interact with the instructors as well as with the other participants from around the world. Classes will also be recorded for playback in case you are unable to attend for any reason. For specific questions, please email us and we'll get back to you as soon as we can.


about fees

We realise we're living in uncertain times. We are a small organisation with no outside funding and like many, we are also in survival mode. During this time, we are offering a limited number of pay-what-you-can solidarity tickets for this online class. Preference is given to women, POC, LGBTQ+ and persons from underrepresented communities in tech who would otherwise be unable to attend.

We have added a generous supporter ticket for anyone interested in helping to subsidise the cost of our solidarity tickets. We greatly appreciate your support.


about tickets

Tickets for this class are currently available via Eventbrite. If you would like to avoid Eventbrite fees, please email us for direct payment options.