For a recent motion graphics project, I needed a vacuum effect where some logos get sucked into a TV. Not being a motion graphics expert, and jumping into After Effects for the first time in a while, I first searched some forums to see if anyone else had asked the question and gotten answers. People have indeed asked how to get this effect, but not many useful answers were out there. Fortunately, I was able to put together something I was happy with, and I wanted to show anyone who’s interested how the effect was achieved.
I’ve been able to find some time to work on this month’s project and in the few hours I’ve been able to devote, it’s been pretty fruitful. Who uses the term “fruitful”? Whatever, it’s going well so far.
The steps that I outlined for myself were to:
Combine all of my past sketches into one project
Create some sort of visual feedback of live data
Record that data and create something cool with it
Step 1 was a lot easier than I thought it would be. I sometimes forget how easy Processing is to work with. I had sketches for calibrating the accelerometer, saving the accelerometer data, and reading and displaying the data. I just created a class for each of those, called their update/draw functions when needed, and that was pretty much it. That was only an hour or two’s worth of work. But really, that just got me to ground level so that I could actually be productive.
The biggest challenge of working with an accelerometer is understanding what the data you’re getting actually means. Originally, I had it in my mind that the accelerometer was measuring the point in space it was at and the acceleration between two points. That’s sort of true, but after thinking about it and reading up some more, I realized that it’s more about the acceleration along each axis than the actual points, since that’s arbitrary information. The next part of thinking through what kind of useful data I could get from the accelerometer was turning those seemingly random numbers into useful numbers. For each axis, I was getting numbers in a range of roughly 225 to 435, which doesn’t mean anything as-is. What do those numbers really mean? You have to turn them into something else that makes sense. After going through some Arduino forum posts, I found an equation that converts the raw readings into a decimal standing for gravitational force (acceleration), based on the voltage and sensitivity of the accelerometer. Luckily, that info was easy to find, and after working it in, it made a huge difference.
From there, I plotted the x, y, and z force onto 3D axes and started turning those numbers into velocity values so that I could draw with the accelerometer.
I’m doing something that’s slowing down my Processing sketch like crazy, so that’s the next thing to figure out. Once that’s done, I have to go in and refactor some code and refine my data saving and reading classes. I’ll then pass this sketch to the artist I’m working with, Craig Damrauer, so that he can start playing with it and coming up with some ideas to turn this data into something beautiful.
In my next update, I hope to share some sort of video demo of my progress. Stay tuned.
February 2012 Project
For my February project, I’ve decided to continue with a project that I’d already started but neglected over the last month or two. I don’t want to give everything away, but it’s about taking values from nature using an accelerometer hooked up to an Arduino and using the data to create something visually interesting in Processing. This is actually a collaboration with an artist who’s also super psyched about this project. He came to me with the idea and I’m here to help him pull it off.
Currently, this project exists as a sketch on an Arduino board and three different Processing sketches that each serve a very specific purpose: one for recording data, one for calibrating the accelerometer, and one for reading and displaying the data. This of course is not an ideal setup, but the Processing environment has been great for splitting up the different functions and figuring out each piece separately.
At the end of the month, I hope to have some cool sketches and hopefully something worthy of creating a nice print of. Since this is a collaboration, it might take a few months to get to that point, but this should at least be a huge step towards being able to create some beautiful stuff with this data.
To get there, this is what I need to do:
Combine the previous Processing sketches into one project/sketch.
Create an accurate representation of the incoming data from the Arduino so that we better understand the data coming in, as well as how it relates to the actual accelerometer movements.
Explore ways to showcase the data in a visually interesting way. I hope this will become a series of experiments, which I’ll try to post on my Tumblr blog.
January Kinect Project – Results
My project for January 2012 is complete for now. The result is below:
The result is virtual pin art: it takes the depth values from a Kinect camera and translates them into a depth that is projected for each pin. This was put together using Cinder and the Kinect CinderBlock, which is the freenect Kinect library configured to work with Cinder.
There were a few things that I learned, some of them super obvious, but they’ll nonetheless help me make better decisions next time around:
– The Kinect has a lot of inconsistencies, especially when it comes to depth data. There are some ways to make things smoother, but you’ll notice that there aren’t many examples out there that rely on precise Kinect depth values.
– Processing is great for prototyping; Cinder is great for the real thing. Processing helped me figure out what was possible and got me there relatively quickly, but once I had a lot of particles on screen, things began to slow down and make my processor chug. Once I moved into C++ and Cinder, those same processes ran much more smoothly.
– Having side projects is hard to keep up with when you have a 1-year-old at home and projects to do at work. This is obvious, but not so much when you’re thinking about all the things you want to learn and explore. That being said, I’m still committed to learning and experimenting as much as I can once my first two priorities are taken care of. Who needs to watch crappy reality TV anyway?
I don’t consider this a final piece, but more of a really good proof of concept. Since this was my first Kinect, Cinder, and C++ project, a lot of the time went into learning the capabilities and workflow. I’m obviously not a C++ expert and I know I have a lot to learn, but this was a great way to learn it. For the second phase of this project, I’d like to bring in some 3D textures and shading to really give it a metal pin art look. I’d also like to smooth out some of the depth map noise. I know I won’t be able to nail that down perfectly, but I’ve read about some methods to smooth it out a bit better within the limitations of the current Kinect’s resolution and camera positioning. The hope is to put this up somewhere as an installation that people can walk up to and interact with, and cycle through to see the imprints of previous visitors as well.
To see some of the work in progress, check out my Tumblr. Once I clean up the code, I’ll post that somewhere too.
So this week, I actually made some good progress and am close to reaching my goal for this month. After jumping from Processing to Cinder, I learned a ton. The biggest challenge was the jump into Cinder and C++ in general. What took me a few hours to come up with in Processing took three or four days in Cinder: I don’t really know C++, I hadn’t touched Cinder in over a year, I upgraded Xcode and had to learn my way around it, and I came across a lot of other weird errors and stupid mistakes of my own making.
Luckily, the author of Cinder, Andrew Bell, is now working with me at The Barbarian Group again, and he has been a good sport about answering my silly questions and helping me through some of the learning process. If he weren’t around to help me out, I’d probably be another week behind where I want to be.
So as far as progress goes, here’s a sample of some screenshots from the Processing and Cinder experiments.
I’ve been able to come up with a good proof of concept that what I had in mind will work; I just need to tidy it up and make it look good so that I can capture a video and write up my learnings. I’ve realized that there’s still more to do on this project, but there’s enough left that it can be another month’s worth of side project, so that’s what I plan on doing. But first, I need to read up more on C++ and get more familiar with Cinder. The next blog post about this project will be the last for the month and will summarize what I’ve learned and showcase the output for this stage of the overall project.
So far, this month’s project is not moving along as quickly as I had hoped, but that’s ok. With the time I have had to work on stuff, I’ve already learned a lot. And because of what I learned, I’ve concluded that I have to modify my goal a bit.
To get going with this project, I’ve read a good chunk of Making Things See, which is an amazing starting point for anyone new to the Kinect, especially because it uses Processing as the teaching environment. Through some of the examples in the book, I was able to piece together a quick little program that grabs only the image data within x distance and disregards the rest. It wasn’t a very complex program to put together, but once you have it working, the Kinect’s limitations are obvious. At this point, it’s not perfect. I had in mind that you’d be able to easily mask out anything in the foreground, but there’s so much noise that it’s impossible to get any sort of clear defining outline. I feel like by somehow averaging a person’s outline and doing some color-comparison image processing, you could get to a good point. The other problem is that the Kinect’s RGB camera is not very high-res, so even if you get close, the image isn’t going to look great.
My hope is that there will eventually be a Kinect update with higher resolution and more depth data. In fact, I’ve heard that an updated Kinect is coming out very soon, though we’ll have to see what kind of improvements it really has. In addition to an updated Kinect, I’d like to figure out if there’s a way to capture a photo with a decent DSLR and match up the data from the Kinect with it to get some decent photos.
So after coming to these conclusions, I decided that there were two directions I could move in. I could either continue exploring this project knowing that I have some pretty high technical fences to jump, or I could take what I’ve learned so far and come up with a new, more feasible direction. I’ve decided to pursue the second route, which entails using the Kinect depth data in a way that could be visually cool and fun to play with, while also having the potential to be a nice little installation. This route also includes the use of a little OpenGL, which I’ve wanted to learn more about too.
I had the pleasure of working with my wife on a project for the first time this week. It was only about a day’s worth of work, but so far it’s been the most viral thing I’ve ever been a part of. It also had the quickest visitor drop-off, due to its timely nature.
The project was the Bey Bey Name Generator (beybeyname.com). It’s a simple little web app inspired by Jay Z and Beyonce giving their new baby the ridiculous name of Blue Ivy. My wife heard this and said to me almost immediately that we should make a website that lets you come up with your own name using the same equation: Color + Plant = Silly Ass Baby Name. I found some spare time to design and code it and threw it online. It’s crazy how simple the concept and execution were, and how people seemed to really dig it.
In the three days since we let it loose, it’s generated 27,000+ views, mostly coming from Facebook, Twitter, The Huffington Post, and hlntv.com. While it was taking off, I kept my eye on the live Google Analytics view to see where it was blowing up at the time. I suggest anyone that’s launching a site check that out. It’s actually really cool (if you find graphs and statistics and that sort of thing cool). But my favorite part of the whole thing was when Beavis and Butthead tweeted about it.
I don’t plan on doing any more of these quick little projects anytime soon, but if the moment arrives, I’ve learned that you just have to run with it before it’s old news.
Project 1 – January 2012
As mentioned in my previous post, my plan for 2012 is to produce a side project every month. For January, I’ve decided to create something with the Xbox Kinect. It’s something I’ve wanted to do ever since I saw some of the amazing stuff people have been posting all over the internet in the year or so since the Kinect was released. I started reading the book Making Things See last month, have done a few of its tutorials, and will use it as a starting point for this little project.
What I plan to do is use Processing to create a program that takes a photo, removes the background, and replaces it with another background. Pretty much what a green screen does, without the green screen. What the background is replaced with and what happens with that photo is gonna be a surprise. If it works out the way I want it to, I’d like to develop it more and make it into some sort of installation that can be used at work or wherever anyone wants to set it up.