
In July I am in Berlin taking the course The Neural Aesthetic at SchoolOfMa.

Excerpt from the course description:

Machine learning is a branch of artificial intelligence concerned with the design of data-driven programs which autonomously demonstrate intelligent behavior in a variety of domains.


Machine learning is the backbone that powers self-driving cars, content recommendation in social media, face identification in digital forensics, and countless other high-level tasks. It has gained rapid interest from the digital arts community, with the recent appearance of numerous artistic hacks of scientific research, such as Deepdream, Stylenet, NeuralTalk, and others.


Creative re-appropriation of these techniques is necessary to refocus machine learning’s influence on those things which we care about. Artistic metaphors help clarify that which is otherwise shrouded by layers of academic jargon, making these highly specialized subjects more accessible to everyday people. Taking such an approach, we can repurpose these academic tools and harness their capabilities for creative expression and empowerment.

I will try to update this page a couple of times a week with images and videos of prototypes and experiments from the course.

Week 1

t-SNE image grid


Made in openFrameworks using ofxTSNE.

Arranging 2000 sports images from the Leeds Sports Pose Dataset in a grid with convnets and t-SNE.

Arranging 2000 portrait images from Aagaards (Kolding Stadsarkiv) in a grid with convnets and t-SNE.

Arranging 5916 items from a Danish supermarket in a grid using convnets and t-SNE.
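The pipeline behind these grids: run every image through a pretrained convnet, keep the activations of a late layer as a feature vector, project the vectors to 2D with t-SNE, and snap the scattered points to a grid. The grids themselves were made in openFrameworks with ofxTSNE; the Python sketch below is only an illustration of the same idea, assuming Keras, scikit-learn and Pillow, with a placeholder image folder and a naive grid snap.

```python
# Illustrative sketch of the convnet + t-SNE grid pipeline
# (not the openFrameworks/ofxTSNE code used for the grids above).
import glob
import numpy as np
from PIL import Image
from keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.manifold import TSNE

# Pretrained convnet used purely as a feature extractor.
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def image_features(path):
    img = Image.open(path).convert("RGB").resize((224, 224))
    x = preprocess_input(np.expand_dims(np.asarray(img, dtype=np.float32), axis=0))
    return model.predict(x).flatten()

paths = sorted(glob.glob("images/*.jpg"))  # placeholder image folder
features = np.array([image_features(p) for p in paths])

# t-SNE projects the high-dimensional features to 2D;
# visually similar images end up close together.
embedding = TSNE(n_components=2, perplexity=30).fit_transform(features)

# Naive grid snap: sort by y, then x, and fill the grid row by row.
n_cols = int(np.ceil(np.sqrt(len(paths))))
order = np.lexsort((embedding[:, 0], embedding[:, 1]))
grid = {paths[idx]: (i % n_cols, i // n_cols) for i, idx in enumerate(order)}
```

A proper grid layout solves an assignment problem between t-SNE points and grid cells (tools like RasterFairy do this), but the naive sort is enough to show the idea.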

Week 2

Crushed dubstep controlled by hand position and gesture

Made with Wekinator, Leap Motion, Ableton and ofxAbletonLive. The music composition is taken from Chapter 15 of the book Interactive Composition.

FaceOSC and Wekinator test for interfacing with Ableton

Made with Wekinator, FaceOSC, Ableton and ofxAbletonLive. The music composition is taken from Chapter 9 of the book Interactive Composition.

Train your facial expressions

Silly little test showing how to classify facial expressions using Wekinator, FaceOSC and Processing.

Computer vision using neural networks and Wekinator

Training an image classifier using ConvnetOSC and Wekinator.
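The wiring is simple: ConvnetOSC streams a convnet activation vector as an OSC message, and Wekinator (with its default settings) listens for inputs on port 6448 at /wek/inputs and sends its predictions to port 12000 at /wek/outputs. As an illustration only (the actual experiment used the ConvnetOSC app, not a script), here is a minimal Python sketch of that OSC traffic using python-osc, with random numbers standing in for the convnet activations.

```python
# Minimal sketch of the OSC wiring around Wekinator (illustrative only;
# the actual experiment used the ConvnetOSC app, not this script).
# Requires the python-osc package. Wekinator's defaults are assumed:
# inputs on port 6448 at /wek/inputs, outputs on port 12000 at /wek/outputs.
import random
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Send a feature vector to Wekinator (random values stand in for convnet activations).
client = SimpleUDPClient("127.0.0.1", 6448)
features = [random.random() for _ in range(4096)]
client.send_message("/wek/inputs", features)

# Listen for the trained classifier's output.
def on_output(address, *args):
    print("Wekinator says:", args)

dispatcher = Dispatcher()
dispatcher.map("/wek/outputs", on_output)
server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)
server.serve_forever()
```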

t-SNE on audio

I ran an .mp3 file containing a 33-minute concert recording of Mussorgsky's Pictures at an Exhibition through a Python script that segments the audio by onset, analyses each chunk, and arranges all the samples in a 2D space where similar-sounding samples are grouped together.

Link to the openFrameworks code and Python scripts here.
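In outline the scripts do the following: detect onsets, cut the file into chunks at those onsets, describe each chunk with audio features such as MFCCs, and run t-SNE on the descriptors. A condensed sketch of that idea, assuming librosa and scikit-learn (this is not the linked code, and the filename is a placeholder):

```python
# Condensed sketch of the audio t-SNE idea (not the linked scripts).
# Assumes librosa and scikit-learn; "concert.mp3" is a placeholder filename.
import numpy as np
import librosa
from sklearn.manifold import TSNE

y, sr = librosa.load("concert.mp3")

# Segment the audio at detected onsets.
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")
chunks = [y[start:end] for start, end in zip(onsets[:-1], onsets[1:])]

# Describe each chunk with the mean of its MFCCs.
descriptors = np.array([
    librosa.feature.mfcc(y=chunk, sr=sr, n_mfcc=13).mean(axis=1)
    for chunk in chunks
])

# t-SNE lays the chunks out in 2D; similar-sounding chunks end up close together.
positions = TSNE(n_components=2, perplexity=30).fit_transform(descriptors)
```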

Audio Reactive Graphics Controlled by Drawn Shapes

A quick test where I control the shapes being generated from sounds by holding up simple drawings in front of a webcam. The webcam image is run through a convolutional neural network using ConvnetOSC, which sends OSC to Wekinator, which I have trained to recognise the different shapes. Even though I only trained for less than a minute with about 100 examples, it is already fairly accurate. Audio analysis and graphics are made in Processing.

Dynamic Time Warping with an Arduino light sensor

A quick test where I have trained Wekinator to recognize simple gestures. I am sending raw readings + velocity from the light sensor.
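Wekinator matches gestures here with dynamic time warping, which compares an incoming sequence to a recorded template while letting the two stretch and compress in time, so the same gesture performed faster or slower still matches. A minimal DTW distance in Python, purely as an illustration of the idea (not Wekinator's implementation):

```python
# Minimal dynamic time warping distance between two 1D sequences
# (an illustration of the idea, not Wekinator's implementation).
import numpy as np

def dtw_distance(a, b):
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Each step may advance in one or both sequences,
            # which is what allows the time axes to stretch.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Example: a slowed-down copy of a gesture still scores as similar.
template = [0, 1, 2, 3, 2, 1, 0]
slower   = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
print(dtw_distance(template, slower))  # small distance despite different lengths
```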

Week 3

Style transfer

My first attempt at image style transfer.

Input image

Style image

Evolution over iterations
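This kind of style transfer (the iterative optimisation approach popularised by Gatys et al., which is presumably what is running here) starts from an image and repeatedly adjusts its pixels so that its convnet features stay close to those of the input image while the Gram matrices of its features approach those of the style image, which is why the result evolves over iterations. A rough sketch of the two losses in NumPy, with dummy feature maps standing in for real convnet activations:

```python
# Sketch of the content and style losses used in Gatys-style transfer
# (dummy feature maps stand in for real convnet activations).
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activation of one convnet layer.
    c, h, w = features.shape
    flat = features.reshape(c, h * w)
    # Correlations between channels capture texture ("style"), not layout.
    return flat @ flat.T / (c * h * w)

def content_loss(result_feat, content_feat):
    return np.mean((result_feat - content_feat) ** 2)

def style_loss(result_feat, style_feat):
    return np.mean((gram_matrix(result_feat) - gram_matrix(style_feat)) ** 2)

# Dummy activations for one layer; in practice these come from a pretrained convnet
# and the total loss (content + weighted style over several layers) is minimised
# by gradient descent on the pixels of the output image.
result = np.random.rand(64, 32, 32)
content = np.random.rand(64, 32, 32)
style = np.random.rand(64, 32, 32)
total = content_loss(result, content) + 100.0 * style_loss(result, style)
```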

Audio analysis

Using MFCCs and Wekinator to teach a program when to laugh while watching sitcoms 😀

Image classifier: is this food healthy?

Trained Wekinator to distinguish images of healthy food from images of junk food. I only fed it a few hundred examples of each class, but it is already doing a decent job. Video upload seems unstable, so here is the link to the source.

Kandinsky Mirror

Got Gene Kogan's Cubist Mirror up and running on my computer and experimented a bit with different models from this repository. See the video here if it does not load.

Style transfer on movie clip