School of Machines, Making & Make-Believe

How can a deeper understanding of machine learning affect our relationships with machines and with each other?

  • / 6 August - 31 August 2018

  • / four weeks, full-time in Berlin, Germany

  • / 10 spots left, apply soon!

  • / Based in LIEBIG12

APPLY


Autonomous Generative Spirit is an intensive four-week program led by artists Gene Kogan and Andreas Refsgaard.

Machine learning is a branch of artificial intelligence concerned with the design of data-driven programs which autonomously demonstrate intelligent behavior in a variety of domains.

Machine learning systems are all around us, silently underpinning the fabric of our digital infrastructure: filtering spam e-mail, detecting banking fraud, making light-speed transactions in the global financial markets, guiding self-driving vehicles, recommending music and films for customers to buy, deciding which search results are relevant to your queries, and powering countless other daily interactions with electronic media that we take for granted.

Machine learning has gained rapid interest from the digital arts community, with the recent appearance of numerous artistic hacks of scientific research, such as Deepdream, Stylenet, NeuralTalk, Pix2Pix, WaveNet, and others. Creative re-appropriation of these techniques is necessary to refocus machine learning's influence on the things we care about.

Artistic metaphors help clarify that which is otherwise shrouded by layers of academic jargon, making these highly specialized subjects more accessible to everyday people. Taking such an approach, we can repurpose these academic tools and harness their capabilities for creative expression and empowerment.

Most interestingly, machine learning enables large-scale collaboration, allowing many separate components to be integrated into a common creation. This course will explore that potential in depth.

Course Description

In the first half of this course, students will be introduced to the field of machine learning as a subject for artistic practice and interdisciplinary research. In Week 1, students will learn how to program self-adapting instruments for real-time musical and visual expression, using Wekinator along with a suite of real-time software applications from ml4a.
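
To give a sense of the plumbing involved, here is a minimal sketch (in Python, using the python-osc package) of how a program might stream two input features to Wekinator over OSC. The port number (6448) and the /wek/inputs address follow Wekinator's documented defaults, and the sine-wave features are stand-ins for whatever sensor, audio, or mouse data an instrument actually uses.

    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    # Wekinator listens for input features on port 6448 at /wek/inputs by default.
    client = SimpleUDPClient("127.0.0.1", 6448)

    t = 0.0
    while True:
        # Two placeholder features standing in for real sensor or audio data.
        features = [math.sin(t), math.cos(0.5 * t)]
        client.send_message("/wek/inputs", features)
        t += 0.05
        time.sleep(0.05)

In Wekinator's interface you would then map these inputs to however many outputs your instrument needs and listen for the resulting output messages (by default on /wek/outputs) in your sound or visual software.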

In Week 2, we will switch focus to deep generative models, a class of algorithms for visual, sound, and text-based synthesis. We will use various libraries made by deep learning researchers and learn how to connect their outputs to the instruments we built in Week 1. We will also build a practical and conceptual understanding of machine learning theory, explaining, analyzing, and dissecting simple and deep neural networks.
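
As a taste of what that dissection looks like, the sketch below (Python with NumPy, an illustrative choice rather than a course requirement) shows the entire forward pass of a single-hidden-layer network: two matrix multiplications with a nonlinearity in between. The layer sizes and random weights are arbitrary.

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input (3) -> hidden (4)
    W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)   # hidden (4) -> output (2)

    def forward(x):
        hidden = np.tanh(x @ W1 + b1)   # nonlinear feature transformation
        return hidden @ W2 + b2         # linear read-out

    print(forward(np.array([0.2, -1.0, 0.5])))

Deep networks simply stack many such layers, and training is the process of nudging the weight matrices until the outputs become useful.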

The goal of the second half of the course will be for the whole class and the instructors to collaborate on a single, unified, large-scale, audiovisual interactive installation to be exhibited on the final day of the course: what we call an “autonomous generative spirit.”

Towards that end, Week 3 will focus mainly on individual and small-group work to begin creating the components of the spirit, interspersed with optional tutorials as needed, with the instructors working closely with the students to help them iterate on their ideas.

During Week 4, the entire class will collaborate to integrate their ideas into a single installation, with the instructors paying close attention to the overall architecture and helping each participant adapt their work for the spirit.

We'll examine the ethical and sociocultural dimensions of machine learning, and discuss the issues that are sure to accompany the ever-increasing integration of these thinking machines into our daily lives.

Course Outline

Week 1: Supervised machine learning from a real-time interaction design perspective.

Week 2: Deep learning and generative models for real-time media synthesis.

Week 3: Special topics, selected tutorials, individual and small group projects.

Week 4: More group work, creation and exhibition of the autonomous generative spirit.

Who is this program for?

This course is aimed at people working in creative disciplines who wish to learn about machine intelligence and how to apply it in their own fields. It is *not* aimed at scientists or engineers seeking a rigorous technical course on machine learning -- plenty of such classes already exist. No specialized knowledge of mathematics or computer science is assumed or expected of students; we will build up our understanding of the subject from elementary building blocks, imagination, analogy, and metaphor.

This course is more practical than theoretical; we are less interested in proving theorems and deriving equations, and more interested in hacking existing tools to make machines that do interesting things. Additionally, this program will emphasize group work and collaboration.

People of diverse backgrounds and interests will all find something to take away from this class. If you are a journalist interested in the socioeconomic ramifications of increased automation, a musician wanting to manipulate your instruments with data streams, a designer wishing to imbue your craft with machine artifacts, or you’re just plain old fascinated by the age-old philosophical dilemma of cognition, this class is for you.

Pricing

  • Artist / Student (Full Time)*: €1950

  • Professional*: €2250

  • Women and persons from LGBTQ+ and other under-represented communities in the tech field are highly encouraged to apply!

    *Includes in-class materials, use of space, and professional mentorship
    Note: If you'd like us to arrange accommodation for the month, please add €525 to the above fee.

Related Links

About ML4A
About Gene Kogan
About Andreas Refsgaard

Experiments with neural channel synthesis, created for the creativity exhibition at the NIPS conference in 2017.

Doodle Tunes lets you turn doodles (drawings) of musical instruments into actual music. A camera looks at your drawing, detects instruments that you have drawn, and begins playing electronic music with those instruments.

Cubist Mirror: Fast style transfer on your webcam.

An algorithm watching a movie trailer: What happens when an object detection algorithm watches a movie trailer?


Instructors

  • Gene Kogan / genekogan.com

    Gene Kogan is an artist and programmer who is interested in generative systems and the application of emerging technology to artistic and expressive contexts. He writes code for live music, performance, and visual art. He contributes to open-source software projects and gives workshops and demonstrations on topics related to code and art.

    He is a contributor to openFrameworks, Processing, and p5.js, an adjunct professor at Bennington College and ITP-NYU, and a former resident at Eyebeam.

  • Andreas Refsgaard / andreasrefsgaard.dk/

    Andreas is an interaction designer and creative coder from Denmark. His background is in sociology, but since graduating from the Copenhagen Institute of Interaction Design he has been working as an interaction designer and is currently exploring artistic applications of machine learning. Andreas is the creator of Eye Conductor, which won best student project at IxDA 2017.
