Some tests made with a Processing sketch by Amnon Owed
First test video (original video here)
Second test video (original video here)
Horizontal Slit Scanning:
Vertical Slit Scanning:
Some JPG stills from the piano video:
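For reference, the slit-scanning mechanic behind the videos above can be sketched in plain Java. This is an assumed, minimal reduction of the idea (the actual Processing sketch by Amnon Owed is not reproduced here): each incoming frame contributes one column of pixels to the output image.

```java
// Minimal sketch of horizontal slit scanning (assumed mechanics):
// for each frame, copy a fixed vertical slit of pixels from the frame
// into the next column of the output image.
public class SlitScan {
    // output[y][x] and frame[y][x] are greyscale/packed pixel values.
    static void addFrame(int[][] output, int[][] frame, int frameIndex) {
        int slitX = frame[0].length / 2;           // fixed center slit
        int destX = frameIndex % output[0].length; // wrap around the output
        for (int y = 0; y < output.length; y++) {
            output[y][destX] = frame[y][slitX];
        }
    }

    public static void main(String[] args) {
        int[][] output = new int[2][3];
        int[][] frame = {{1, 2, 3}, {4, 5, 6}};
        addFrame(output, frame, 0); // center column {2, 5} lands in column 0
        System.out.println(output[0][0] + " " + output[1][0]); // 2 5
    }
}
```

Vertical slit scanning is the same idea with rows and columns swapped.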
This sketch is based on Kim Asendorf’s pixel sorting algorithm. Photos are loaded from Instagram based on a set of hashtags, the music is just an MP3 played through Minim (a Processing library) and the rest is generated in realtime by the program, in a similar way to this post. Scroll down for a bunch of content…
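The core of the pixel sorting idea can be sketched as follows: a minimal plain-Java version that sorts runs of bright pixels in one row, broken up by dark pixels. The threshold value is an assumption, and this is a simplification (Asendorf's actual algorithm works on full ARGB rows/columns with black and white threshold modes).

```java
import java.util.Arrays;

// Minimal sketch of interval-based pixel sorting on one row of
// brightness values (simplified from Kim Asendorf's approach).
public class PixelSortSketch {
    static final int THRESHOLD = 60; // hypothetical darkness threshold

    // Sort each contiguous run of pixels brighter than THRESHOLD, ascending.
    static int[] sortRow(int[] row) {
        int[] out = row.clone();
        int start = 0;
        while (start < out.length) {
            // skip dark pixels: they break the sorting intervals
            while (start < out.length && out[start] < THRESHOLD) start++;
            int end = start;
            while (end < out.length && out[end] >= THRESHOLD) end++;
            Arrays.sort(out, start, end);
            start = end;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] row = {200, 90, 150, 10, 240, 120};
        // dark pixel 10 stays put; the bright runs on either side get sorted
        System.out.println(Arrays.toString(sortRow(row)));
        // → [90, 150, 200, 10, 120, 240]
    }
}
```

Run over every row (or column) of an image, this produces the characteristic smeared-gradient look.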
A bit of a different post here. My daughter really likes The Jungle Book, especially the dance scene in the temple ( http://youtu.be/O3Le7yqCOSY ), so I wanted to try controlling a puppet with the Kinect so that she could eventually dance with Mowgli or Baloo or something once she can walk.
This is my first attempt at playing with OpenGL and Kinect data. The result is rather wild and intriguing… This sketch by itself is kind of pointless but opens some interesting doors for later. The GLGraphics library by Andres Colubri works wonderfully well and it really multiplies the possibilities of Processing. I will definitely dig deeper and try to make something a bit more “clean” with it :-) For example: integrating different materials, playing with different 3D shapes (spheres, proper quad strips and triangle strips, triangle meshes, etc…), making the shapes / colors responsive to music / movements, improving the behavior of the 3D shapes, etc, etc… The lighting result is very interesting as well.
I’ve been playing around a bit with touchOSC and Processing and tried to build a small LED grid display on the iPad. The program communicates with touchOSC on the iPad through the oscP5 library: Processing loads the image displayed on the screen, extracts the greyscale value of each pixel and sends it to the iPad; nothing very challenging. The only issue was that touchOSC “loses” a lot of messages when receiving, and the bundle system does not work very well. It is clearly more useful and reliable to use touchOSC as a controller to send messages (to control Processing variables) than to try to receive a large amount of messages with it.
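The per-pixel greyscale step can be sketched like this in plain Java. Processing's pixel arrays hold packed ARGB ints as shown below; the Rec. 601 luma weights are an assumption (the post doesn't say which conversion the sketch uses, and any reasonable weighting would do).

```java
// Extracting a greyscale value from a packed ARGB pixel, as done
// before sending each cell's brightness to the iPad over OSC.
public class GreyGrid {
    // Returns the greyscale value (0–255) of one packed ARGB pixel.
    static int grey(int argb) {
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;
        // Rec. 601 luma weights (assumed choice)
        return Math.round(0.299f * r + 0.587f * g + 0.114f * b);
    }

    public static void main(String[] args) {
        System.out.println(grey(0xFFFFFFFF)); // white → 255
        System.out.println(grey(0xFF000000)); // black → 0
    }
}
```

In the actual sketch each value would then be wrapped in an OSC message with oscP5 and sent to touchOSC.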
A little sketch in reference to this project by Stefan Sagmeister. I tried to make a program that generates something similar. I originally wanted to make it run in real time (i.e. you type a letter and it appears on the screen instantly), but to achieve a result close to Stefan Sagmeister’s original work, each letter has to be scaled and smoothed to make it more round. The smoothing part is quite CPU intensive (see explanations below). I made a few experiments based on that sketch afterward where rendering everything in real time was not a problem at all (check them here), but for this one it’s a bit of a b****.
This program is also running some effects on a video stream; in this particular case it’s a video playing, but it can also be the webcam feed (see this post).
Like in the other post, the program divides the image into a pixel grid, then each pixel has parameters that can be controlled (scaling, x/y position of each corner of the polygon, fading speed, filled / outlined, etc…).
Alright, so this program is running some effects on a video stream; in this particular case it’s the webcam feed, but it can also be a video playing (see this post).
It doesn’t do anything complicated: the image is divided into a pixel grid, then each pixel has parameters that can be controlled (scaling, x/y position of each corner of the polygon, fading speed, filled / outlined, etc…). Basically, a sum of effects put together makes for pretty trippy and unexpected results, as is often the case while playing with Processing. I plan to dig deeper on that one as well. An interesting effect to explore would be to have each “pixel”, or rather each polygon, act as an intelligent agent with some rules and behaviors; in this particular sketch the only rule they follow is to go in one of 8 directions, but we can do much more interesting things than that. We could also play with gradients, alpha, overlay effects, etc, etc… endless ;-)
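The grid-division step itself can be sketched in plain Java like this. It's a hypothetical reduction of the idea: the cell size is an assumption, and averaging is just one way to pick a cell's value; each resulting grid entry would then drive one polygon's parameters (scale, corner offsets, fade speed, fill).

```java
// Downsampling a greyscale image into a grid of cells by averaging
// each cell x cell block of pixels.
public class PixelGrid {
    static int[][] gridAverages(int[][] img, int cell) {
        int rows = img.length / cell, cols = img[0].length / cell;
        int[][] grid = new int[rows][cols];
        for (int gy = 0; gy < rows; gy++) {
            for (int gx = 0; gx < cols; gx++) {
                int sum = 0;
                for (int y = 0; y < cell; y++)
                    for (int x = 0; x < cell; x++)
                        sum += img[gy * cell + y][gx * cell + x];
                grid[gy][gx] = sum / (cell * cell);
            }
        }
        return grid;
    }

    public static void main(String[] args) {
        int[][] img = {
            {0, 0, 255, 255},
            {0, 0, 255, 255},
        };
        // a 2x4 image with 2x2 cells collapses to one dark, one bright cell
        int[][] grid = gridAverages(img, 2);
        System.out.println(grid[0][0] + " " + grid[0][1]); // 0 255
    }
}
```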
Some experiments related to the Sagmeisterizer (see this post)
The text typed on the keyboard is converted into polygons (with the Geomerative lib.), then we use the coordinates of each point of the polygon to update the vertices of a GLModel (GLGraphics lib.), which we then copy and scale along the z axis. Add a bit of randomness and stuff and here you go… There is a lot more to do with this sketch (as usual): I should parametrize some variables so that they can be altered in realtime with the keyboard or the mouse, play more with colors, etc, etc… But as the title states, this is just an experiment! :-P
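The copy-along-z step can be sketched in plain Java like this. It's only the vertex math (Geomerative's polygon extraction and GLModel's vertex buffers are not reproduced; the depth value is an arbitrary assumption).

```java
// Extruding a 2D polygon into 3D: duplicate each (x, y) vertex at
// z = 0 (front face) and z = depth (back face).
public class ExtrudeSketch {
    static float[][] extrude(float[][] poly2d, float depth) {
        float[][] out = new float[poly2d.length * 2][];
        for (int i = 0; i < poly2d.length; i++) {
            out[i] = new float[]{poly2d[i][0], poly2d[i][1], 0f};
            out[i + poly2d.length] =
                new float[]{poly2d[i][0], poly2d[i][1], depth};
        }
        return out;
    }

    public static void main(String[] args) {
        float[][] tri = {{0, 0}, {1, 0}, {0, 1}};
        float[][] v = extrude(tri, 5f);
        System.out.println(v.length); // 6 vertices (front + back face)
        System.out.println(v[3][2]);  // back-face copy sits at z = 5.0
    }
}
```

Connecting corresponding front/back vertices as quads would then give the side walls of the extruded letter.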
Here’s a basic sketch to experiment with 3D tracking with Kinect and Processing.
Nothing complicated: once the user is calibrated, the camera focuses on them and switches focus points every 5 seconds. A focus point can be any joint of the 3D skeleton (head, torso, hips, hand, etc…). After each change of focus the camera makes a random motion (dolly, pan, zoom, etc…). If the user is detected but not calibrated, the camera focuses on the user’s center of mass; if the user is neither detected nor calibrated, the camera focuses a few meters away in front of the Kinect.
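The focus-selection rules above can be sketched as a small decision function; this is a plain-Java sketch with hypothetical names (the actual Kinect joint and center-of-mass calls from the tracking library are not shown).

```java
import java.util.Random;

// Picks the 3D point the camera should focus on, following the rules:
// calibrated → a random skeleton joint; detected only → center of mass;
// nobody there → a default point a few meters in front of the Kinect.
public class CameraFocus {
    static final float[] DEFAULT_FOCUS = {0f, 0f, 2000f}; // assumed default, in mm

    static float[] pickFocus(boolean detected, boolean calibrated,
                             float[][] joints, float[] centerOfMass,
                             Random rng) {
        if (calibrated) {
            // calibrated: pick one of the skeleton joints at random
            return joints[rng.nextInt(joints.length)];
        }
        if (detected) {
            // detected but not calibrated: fall back to the center of mass
            return centerOfMass;
        }
        return DEFAULT_FOCUS;
    }

    public static void main(String[] args) {
        float[] com = {10f, 10f, 1800f};
        // no user at all → default focus point
        float[] f = pickFocus(false, false, null, com, new Random());
        System.out.println(f[2]); // 2000.0
    }
}
```

In the sketch this function would be re-evaluated every 5 seconds, followed by the random camera motion (dolly, pan, zoom).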
Just a simple test to compare the performance of the native 3D renderer and Andres Colubri’s GLGraphics library.
The program displays 500 ‘shaking’ spheres with 3 different light types:
The point light and spot light follow mouseX and mouseY.