Social media generated art in Python #ThisIsMyClassroom #Programming #STEAM


For the third blog post on this topic I wanted to use Python to generate different pieces of art without relying entirely on the random function. I decided to use the tweepy library, mainly because I had already used it to post content to Twitter but had never investigated how it could be used to read information back from Twitter.

It didn’t take long to find out how to read the latest 10 tweets from my own timeline using Python. I split the individual words into a list and sorted them into alphabetical order (for no real reason at the moment, but frequency analysis will follow!), then used the write method from the Turtle graphics library to place each word at a random location on the screen. This was my first attempt:
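If you want to try the same idea, the word-gathering step can be sketched like this. The sample strings stand in for the tweepy timeline call (credentials omitted), and the drawing is wrapped in a function so it only runs when you call it — this is a rough sketch, not my exact code:

```python
import random
import turtle

def gather_words(tweet_texts):
    # Split every tweet into individual words and sort alphabetically,
    # ignoring case (frequency analysis can come later).
    return sorted((w for t in tweet_texts for w in t.split()), key=str.lower)

def scatter(words, width=300, height=250):
    # Write each word at a random position using turtle's write method.
    pen = turtle.Turtle()
    pen.hideturtle()
    for word in words:
        pen.penup()  # lift the pen so moves between words leave no lines
        pen.goto(random.randrange(-width, width), random.randrange(-height, height))
        pen.write(word, font=("Palatino", 14, "normal"))

# In the real program these texts come from tweepy (e.g. the latest
# ten statuses on the home timeline); two samples stand in here.
sample = ["Generating art in Python", "turtle graphics is fun"]
words = gather_words(sample)
```

Calling `scatter(words)` then paints the sorted words onto the Turtle screen.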

Screen Shot 2016-04-03 at 23.50.08

The words were a bit tricky to read, I thought. And I’d accidentally forgotten to penup before moving the turtle. However, this accidental vector spider web became part of the artwork (when I removed it, the result looked quite boring).

A little while later I was able to vary the font size at random (I settled on Palatino after experimenting with a few others) and, just as in my previous Python art programs, change the pencolor at random so the text colour varied too.

Screen Shot 2016-04-03 at 23.53.35

I had a lot of text to display, even from just 10 tweets, so I thought of ways to reduce the amount. I wrote a little Python subroutine that removed hashtags, mentions and URLs (as well as any other non-ASCII text), and that was enough!
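My subroutine isn’t reproduced here, but a regex-based version of the same clean-up might look like this (the exact patterns are my own guesses, not the code from the post):

```python
import re

def clean_tweet(text):
    # Strip URLs, @mentions and #hashtags from one tweet's text.
    text = re.sub(r"https?://\S+", "", text)
    text = re.sub(r"[@#]\w+", "", text)
    # Drop any remaining non-ASCII characters (emoji, smart quotes, etc.)
    text = text.encode("ascii", "ignore").decode("ascii")
    # Collapse the leftover whitespace into single spaces.
    return " ".join(text.split())
```

Running each tweet through this before splitting into words keeps the display to plain readable text.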

The video below shows the program in action. I decided to make a video this time because you can make out the individual words much more clearly at the beginning of the drawing than at the end!

As before, the code is now on GitHub (with my tweepy credentials removed for security). I’ve left in a commented-out section of code that lets you run a search for a keyword, hashtag or phrase instead of taking the latest timeline, so you can experiment.

Any comments or improvements would be much appreciated!

SOUND GENERATED ART IN PYTHON #THISISMYCLASSROOM #PROGRAMMING #STEAM


I had a lot of fun experimenting with the subroutines and Python Turtle methods yesterday but wanted to push it a little further and find out if I could make use of a new Python library to help create automated art.

Somehow I’ve never built a program that utilises and analyses audio before, so I challenged myself to find out more about libraries such as PyAudio and Wave this afternoon. My daughter was practising piano in the other room, which gave me a push to integrate live audio into my solution rather than rely on pre-recorded WAV files.

I learned about numpy a little this afternoon too. I hadn’t realised it had functions to extract the frequency from an audio block (FFT). The more I explore Python, the more I fall in love with it as a language!
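As a rough sketch of the idea, numpy’s real FFT can pull the dominant frequency out of a block of samples like this (the synthesised sine wave at the end is just there to check it works):

```python
import numpy as np

def dominant_frequency(samples, rate):
    # Real FFT of one block of audio; rfftfreq maps each bin to Hz.
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    # The loudest bin's frequency is the dominant one.
    return freqs[np.argmax(spectrum)]

# Synthesise one second of a 440 Hz sine at 44.1 kHz as a sanity check.
rate = 44100
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)
```

Feeding each captured block through a function like this yields the list of frequencies the drawing code consumes.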

Once I’d successfully extracted numeric frequencies from the 5-second WAV file into a list, I looped through them and attempted to place shapes on the Python Turtle screen to correlate with the current frequency. I decided on a simple X-axis plot to begin with but, as I realised the range between the minimum and maximum frequencies usually exceeded 8000 Hz, I introduced a scale factor so they could all be seen on the screen together, and adjusted the Y axis so that each frequency appeared bottom to top in the order of analysis.
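The scale factor itself is just a linear mapping. A minimal version might look like this (the screen range values here are assumptions, not necessarily the ones I used):

```python
def scale(value, lo, hi, screen_lo=-300, screen_hi=300):
    # Linearly map a frequency from [lo, hi] onto the screen's X range,
    # so a spread of 8000+ Hz still fits inside the visible window.
    return screen_lo + (value - lo) / (hi - lo) * (screen_hi - screen_lo)
```

With `lo` and `hi` set to the minimum and maximum of the frequency list, every plotted circle lands on screen.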

Screen Shot 2016-03-31 at 18.18.40

Quite nice, but there’s a lot of white space where the unused frequency range lies. Instead of removing this range from the visualisation (which, in retrospect, might have been a good idea) I attempted to create ghosts of the circles, fading out as they got further from the original position. This led me into colorsys and all sorts of bother, reminding me (eventually) not to mess with anything that returns a tuple until I’ve converted it to a list first. Anyway, I removed that part of the code and put my arty effects on the back burner. You can see one example of the mess below. Ugh.
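For the curious, the tuple-versus-list gotcha looks something like this; `fade` is a hypothetical reconstruction of the ghosting effect, not the code I deleted:

```python
import colorsys

def fade(rgb, steps=4):
    # colorsys returns tuples, which can't be modified in place -
    # convert to a list before tweaking values, then back to a tuple.
    h, l, s = colorsys.rgb_to_hls(*rgb)
    ghosts = []
    for i in range(1, steps + 1):
        # Push lightness toward 1.0 (white) for each successive ghost.
        faded = list(colorsys.hls_to_rgb(h, l + (1 - l) * i / steps, s))
        ghosts.append(tuple(round(c, 3) for c in faded))
    return ghosts
```

Each ghost circle would then be drawn with the next colour in the list, progressively washing out to the background.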

Screen Shot 2016-03-31 at 18.19.00

I decided to alter the colour of the background this time too. I think I’d like to use some audio analysis to decide on the colour range in a future version so that low audio frequencies create darker images and high frequencies create bright, bubblegum pop images.
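A hedged sketch of that future idea might map a frequency onto background brightness like this (the frequency bounds and greyscale choice are arbitrary placeholders):

```python
def bg_color(freq, lo=50, hi=8000):
    # Clamp the frequency into [0, 1] of the expected range, then use it
    # as a brightness: low pitches dark, high pitches bright.
    t = max(0.0, min(1.0, (freq - lo) / (hi - lo)))
    return (t, t, t)  # greyscale for now; could lerp toward bubblegum hues
```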

Screen Shot 2016-03-31 at 18.06.42

The last thing I added to the program was the option to use pre-recorded audio WAV files instead of always recording 5 seconds of audio. This was very easy to add as I’d modularised the code as I went, so all that was needed was a few lines extra in the main program:
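Reading a pre-recorded file with the standard wave module only takes a few lines. This sketch (my actual program’s structure differs) writes a tiny mono file first so it is self-contained:

```python
import struct
import wave

def read_samples(path):
    # Open a pre-recorded WAV instead of capturing 5 seconds from the
    # mic; returns the raw 16-bit samples as a list of ints (mono assumed).
    with wave.open(path, "rb") as wf:
        frames = wf.readframes(wf.getnframes())
        return list(struct.unpack("<%dh" % (len(frames) // 2), frames))

# Write a tiny four-sample demo file so the sketch runs on its own.
with wave.open("demo.wav", "wb") as wf:
    wf.setnchannels(1)   # mono
    wf.setsampwidth(2)   # 16-bit samples
    wf.setframerate(8000)
    wf.writeframes(struct.pack("<4h", 0, 100, -100, 0))
```

The returned sample list can be handed straight to the same FFT-and-draw pipeline as the live recording.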

Screen Shot 2016-03-31 at 19.08.33

Trying out the program with a few WAV files from www.findsounds.com or playing a YouTube video in the background resulted in the following images:

chimpanzee.wav
uptown funk

Python files can be found at GitHub – https://github.com/familysimpson/PythonArt/. Feel free to fork the code, leave comments below or just enjoy the images it generates!

Computer Generated Art #thisismyclassroom #programming #steam


Screen Shot 2016-03-31 at 02.22.22

I wanted to create a task that allowed students to create a computer program in Python that would automatically create its own artwork but be customisable so that each student could experiment and personalise their own program to their tastes.

Screen Shot 2016-03-31 at 02.15.10

It’s a rough Python 3 program using the Turtle library and a list of Turtles, but so far it has produced some really nice work. In the images shown below the program uses a user-defined function that draws a randomly sized square. I thought this would be easy for the students to understand and hack into something new!

Screen Shot 2016-03-31 at 02.15.24

Of course art can be created as a response to an external stimulus so a possible extension of this program would be to get input from the user (colours, mood, age) or calculate a range of colours from an input sensor or device (temperature, time, image).
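As a hypothetical starting point for that extension, a sensor reading could be turned into a base colour like this (the 0–40 °C range and the blue-to-red blend are assumptions for illustration):

```python
def colour_from_temperature(temp_c, lo=0.0, hi=40.0):
    # Clamp the reading into [0, 1], then blend from cool blue to warm
    # red so a cold sensor gives blue artwork and a hot one gives red.
    t = max(0.0, min(1.0, (temp_c - lo) / (hi - lo)))
    return (t, 0.2, 1.0 - t)
```

The returned tuple can be passed straight to a Turtle’s color method (with the default colormode of 1.0).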

Screen Shot 2016-03-31 at 02.15.38

The code is below! Any suggestions or improvements would be appreciated!

import turtle
import random
wn = turtle.Screen()
w = wn.window_width()
h = wn.window_height()

t1 = turtle.Turtle()
t2 = turtle.Turtle()
t3 = turtle.Turtle()
t4 = turtle.Turtle()
t5 = turtle.Turtle()
t6 = turtle.Turtle()

turtles = [t1, t2, t3, t4, t5, t6]

def square(item, size):
    for x in range(4):
        item.forward(size)
        item.right(90)
    item.forward(size)
    item.left(random.randrange(-180, 180))

wn.tracer(False)
for iteration in range(3):
    for item in turtles:
        item.penup()
        item.goto(random.randrange(-w, w), random.randrange(-h, h))
        item.color(random.randrange(0, 255)/255., random.randrange(0, 255)/255., random.randrange(0, 255)/255.)
        item.pendown()
    wn.tracer(False)
    for move in range(2500):
        for item in turtles:
            item.speed(0)
            square(item, random.randrange(5, 25))
    wn.tracer(True)

wn.exitonclick()

Screen Shot 2016-03-31 at 02.51.34

Screen Shot 2016-03-31 at 02.54.57