The Nature of Code for Cinder

Last year, Daniel Shiffman released The Nature of Code, a book that explores and explains how to simulate systems that appear in nature using Processing. In the book's introduction, he put out an open invitation for anyone who cared to contribute to convert the book's examples into other languages and frameworks that might be useful to people, be it Flash, JavaScript, openFrameworks, etc. I took this as a chance to learn the principles taught in the book by converting everything to Cinder.

I’m still working on it in my spare time, but I’ve made some progress. You can check out the first 4 chapters on GitHub. I hope to have the next chapter done in December. Enjoy.

And if you haven’t checked it out yet, take a look at The Nature of Code. Buy the ebook or print copy. Whatever. Dan does a great job breaking down the principles necessary to simulate these different systems in a way that’s accessible and makes you actually feel kind of smart. It’s quite great.

February 2012 Project – Update 2

The furthest I’ve been able to get with this project for the month was to create a decent tool for Craig to use. I put together a quick video demo below. It’s not super flashy and is still in an early phase, but it lets us do a lot more, much more quickly, than we could before this month.

The way this works is that there are two modes, as you’ll see in the upper right-hand corner when you first run it: Record mode and Read mode.

When in the default “Record” mode, you’ll see a white dot starting in the center of the screen. With the accelerometer/Arduino plugged in, the white dot should move around based on the incoming accelerometer data. There’s a file in the project folder called “calibration.txt” that stores the calibration values each time the Arduino board is recalibrated. To run the calibration, press ‘c’. Keep the accelerometer as flat and horizontal as you can, and it will average the incoming values over the 10 seconds that it runs. Once done, it saves that data to the text file. If that text file isn’t found, the calibration runs right away.
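To give a rough idea of how that flow could be wired up, here's a minimal Processing sketch of the calibrate/save/load cycle. The calibration.txt file name comes from the actual project; everything else here (the "x,y,z" serial format, variable names, the exact averaging) is a hypothetical reconstruction, not the real tool.

    // A minimal sketch of the calibrate/save/load flow described above.
    // "calibration.txt" is the real file name; the serial format and the
    // rest of this is a hypothetical reconstruction.
    import processing.serial.*;

    Serial arduino;
    float[] zeroG = new float[3];  // averaged resting values for x, y, z
    float[] sums = new float[3];
    int sampleCount = 0;
    boolean calibrating = false;
    int calibrationStart;

    void setup() {
      size(600, 400);
      arduino = new Serial(this, Serial.list()[0], 9600);
      String[] lines = loadStrings("calibration.txt");
      if (lines == null) {
        startCalibration();  // no saved file yet, so calibrate right away
      } else {
        float[] vals = float(split(lines[0], ','));
        for (int i = 0; i < 3; i++) zeroG[i] = vals[i];
      }
    }

    void draw() {
      background(0);
      if (calibrating && sampleCount > 0 && millis() - calibrationStart > 10000) {
        // 10 seconds are up: average the samples and save them for next time.
        for (int i = 0; i < 3; i++) zeroG[i] = sums[i] / sampleCount;
        saveStrings("calibration.txt",
                    new String[] { zeroG[0] + "," + zeroG[1] + "," + zeroG[2] });
        calibrating = false;
      }
    }

    void serialEvent(Serial s) {
      String line = s.readStringUntil('\n');
      if (line == null) return;
      float[] raw = float(split(trim(line), ','));
      if (raw.length < 3) return;
      if (calibrating) {
        for (int i = 0; i < 3; i++) sums[i] += raw[i];
        sampleCount++;
      }
    }

    void keyPressed() {
      if (key == 'c') startCalibration();  // keep the board flat while it runs
    }

    void startCalibration() {
      sums = new float[3];
      sampleCount = 0;
      calibrationStart = millis();
      calibrating = true;
    }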


February 2012 Project – Update 1

I’ve found some time to work on this month’s project, and the few hours I’ve been able to devote to it have been pretty fruitful. Who uses the term “fruitful”? Whatever, it’s going well so far.

The steps that I outlined for myself were to:

  1. Combine all of my past sketches into one project
  2. Create some sort of visual feedback of live data
  3. Record that data and create something cool with it

Step 1 was a lot easier than I thought it would be. I sometimes forget how easy Processing is to work with. I had sketches for calibrating the accelerometer, saving the accelerometer data, and reading and displaying the data. I just created a class for each of those and called their update/draw functions when needed, and that was pretty much it. That was only an hour or two’s worth of work. But really, that just got me to ground level so that I could actually be productive.
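For what it's worth, the combined project is organized roughly like the stub below: each former sketch becomes a class with its own update/draw, and the main sketch decides which one is active. The class names and key bindings are made up for illustration.

    // A rough outline of how the combined sketch is organized: each of the old
    // sketches becomes a class with its own update/draw, and the main sketch
    // decides which one runs. Class names and keys are made up for illustration.
    AccelCalibrator calibrator;
    DataRecorder recorder;
    DataReader reader;
    boolean readMode = false;

    void setup() {
      size(800, 600);
      calibrator = new AccelCalibrator();
      recorder = new DataRecorder();
      reader = new DataReader();
    }

    void draw() {
      background(0);
      if (readMode) { reader.update(); reader.draw(); }
      else          { recorder.update(); recorder.draw(); }
    }

    void keyPressed() {
      if (key == 'r') readMode = !readMode;   // toggle between live and playback
      if (key == 'c') calibrator.start();     // re-run the calibration
    }

    // Stubs standing in for the real classes.
    class AccelCalibrator { void start() { /* average the resting values */ } }
    class DataRecorder { void update() { } void draw() { /* live white dot */ } }
    class DataReader   { void update() { } void draw() { /* saved data */ } }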

The biggest challenge of working with an accelerometer is understanding what the data you’re getting actually means. Originally, I had it in my mind that the accelerometer was measuring the point in space it was at and the acceleration between two points. That’s sort of true, but after thinking about it and reading up some more, I realized that it’s really about the acceleration along each axis rather than actual points in space, since that’s arbitrary information.

The next part of thinking through what kind of useful data I could get from the accelerometer was turning those seemingly random numbers into useful numbers. For each axis, I was getting numbers in a range of roughly 225 to 435, which doesn’t mean anything as is. What do those numbers really mean? You have to turn them into something else that makes sense. After going through some Arduino forum posts, I found an equation that converts the raw readings into a decimal value standing for gravitational force (acceleration), based on the voltage and sensitivity of the accelerometer. Or something like that. Luckily, that info was easy to find. After working this in, it made a huge difference.
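In case it's useful to anyone, the conversion looks roughly like the snippet below. The constants are assumptions (a 3.3V analog accelerometer read by a 5V, 10-bit Arduino ADC, with a sensitivity of 0.33 V per g), not the exact figures from my setup.

    // Hypothetical conversion from a raw analog reading to g-force. The
    // constants are assumptions; adjust them for the actual hardware.
    float VREF = 5.0;           // ADC reference voltage
    float ADC_MAX = 1023.0;     // 10-bit analog-to-digital converter
    float ZERO_G_VOLTS = 1.65;  // voltage the sensor outputs at 0 g
    float SENSITIVITY = 0.33;   // volts per g

    float rawToG(int raw) {
      float volts = raw * VREF / ADC_MAX;           // raw count -> volts
      return (volts - ZERO_G_VOLTS) / SENSITIVITY;  // volts -> g
    }

    void setup() {
      println(rawToG(337));  // roughly 0 g (board lying flat)
      println(rawToG(405));  // roughly +1 g
    }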

From there, I plotted the x, y, and z forces onto a 3D axis and started turning those numbers into velocity values so that I could draw with the accelerometer.
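The drawing part mostly comes down to accumulating acceleration into a velocity each frame. Here's a self-contained sketch of that idea; the g-force inputs are faked with noise so it runs on its own, and the scale and damping numbers are arbitrary.

    // Hypothetical sketch: integrate acceleration into velocity to move a point.
    // gForceX/gForceY would come from the converted accelerometer readings;
    // here they're faked with noise so the sketch is self-contained.
    PVector pos, vel;

    void setup() {
      size(600, 600);
      pos = new PVector(width / 2, height / 2);
      vel = new PVector(0, 0);
      background(255);
    }

    void draw() {
      // Stand-ins for the live g-force values on the x and y axes.
      float gForceX = (noise(frameCount * 0.01) - 0.5) * 2;
      float gForceY = (noise(1000 + frameCount * 0.01) - 0.5) * 2;

      vel.add(gForceX * 0.5, gForceY * 0.5);  // acceleration accumulates into velocity
      vel.mult(0.95);                         // damping so it doesn't run away
      pos.add(vel);

      stroke(0, 40);
      point(pos.x, pos.y);
    }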

I’m doing something that’s slowing down my Processing sketch like crazy, so that’s the next thing to figure out. Once that’s done, I have to go in and refactor some code and refine my data saving and reading classes. I’ll then pass this sketch to the artist I’m working with, Craig Damrauer, so that he can start playing with it and coming up with some ideas to turn this data into something beautiful.

In my next update, I hope to share some sort of video demo of my progress. Stay tuned.

February 2012 Project

For my February project, I’ve decided to continue with a project that I’d already started but neglected over the last month or two. I don’t want to give everything away, but it’s about taking values from nature with an accelerometer hooked up to an Arduino and turning that data into something visually interesting in Processing. This is actually a collaboration with an artist who’s also super psyched about this project. He came to me with this idea and I’m here to help him pull it off.

Currently, this project exists as a sketch on an Arduino board and three different Processing sketches that each serve a very specific purpose: one for recording data, one for calibrating the accelerometer, and one for reading and displaying the data. This of course is not an ideal setup, but the Processing environment has been great for splitting up the different functions and figuring out each piece separately.

At the end of the month, I hope to have some cool sketches and hopefully something worthy of a nice print. Since this is a collaboration, it might take a few months to get to that point, but this should at least be a huge step towards being able to create some beautiful stuff with this data.

To get there, this is what I need to do:

  • Combine the previous Processing sketches into one project/sketch.
  • Create an accurate representation of the incoming data from the Arduino so that we better understand the data coming in as well as how it relates to the actual accelerometer movements.
  • Explore ways to showcase the data in a visually interesting way. I hope this will become a series of experiments. I’ll try to post these on my Tumblr blog.

January Kinect Project – Results

My project for January 2012 is complete for now. The result is below:

The result is virtual pin art: it takes the depth values from a Kinect camera and translates them into a displacement that is projected for each pin. This was put together using Cinder and the Kinect CinderBlock, which is the freenect Kinect library configured to work with Cinder.
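The core mapping is simple enough to sketch out. Below is a rough Processing version of the idea (the actual piece is written in Cinder/C++): sample a depth value on a grid and push each pin out along z based on it. The depth here is faked with noise so the sketch runs on its own; the real thing reads it from the Kinect.

    // A rough, self-contained sketch of the pin-art mapping: sample a depth
    // value on a grid and offset each "pin" along z accordingly. Depth is
    // faked with noise here; the real project gets it from the Kinect.
    int spacing = 20;

    void setup() {
      size(640, 480, P3D);
    }

    void draw() {
      background(0);
      lights();
      noStroke();
      for (int x = 0; x < width; x += spacing) {
        for (int y = 0; y < height; y += spacing) {
          // Stand-in for a depth sample at (x, y), 0..1 where 1 is closest.
          float depth = noise(x * 0.01, y * 0.01, frameCount * 0.02);
          float z = map(depth, 0, 1, 0, 150);  // closer points push pins further out
          pushMatrix();
          translate(x, y, z);
          box(spacing * 0.6, spacing * 0.6, 10);  // one pin head
          popMatrix();
        }
      }
    }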

There were a few things that I learned, some of them super obvious, that will nonetheless help me make better decisions next time around:

– The Kinect has a lot of inconsistencies, especially when it comes to depth data. There are some ways to make things smoother, but you’ll notice that there aren’t too many examples out there that rely on precise Kinect depth values.

– Processing is great for prototyping; Cinder is great for the real thing. Processing helped me figure out what was possible and got me there relatively quickly, but once I had a lot of particles on screen, things quickly began to slow down and make my processor chug. Once I moved into C++ and Cinder, those same processes ran much more smoothly.

– Having side projects is hard to keep up with when you have a 1-year-old at home and projects to do at work. This is obvious, but not so much when you’re thinking about all the things you want to learn and explore. That being said, I’m still committed to learning and experimenting as much as I can once my first two priorities are taken care of. Who needs to watch crappy reality TV anyway?

I don’t consider this a final piece, but more of a really good proof of concept. Since this was my first Kinect, Cinder, and C++ project, a lot of the time went into learning the capabilities and workflow. I’m obviously not a C++ expert and I know I have a lot to learn, but this was a great way to learn it. For the second phase of this project, I’d like to bring in some 3D textures and shading to really give it a metal pin-art look. I’d also like to smooth out some of the depth-map noise. I know I won’t be able to nail that down perfectly, but I’ve read about some methods to smooth it out a bit better within the limitations of the current Kinect’s resolution and camera positioning. The hope is to set this up somewhere as an installation that people can walk up to and interact with, and cycle through to see the imprints of previous visitors as well.

To see some of the work in progress, check out my Tumblr. Once I clean up the code, I’ll post that somewhere too.

Related Posts:
Project 1 – January 2012
January Kinect Project – Update 1
January Kinect Project – Update 2

January Kinect Project – Update 2

So this week I actually made some good progress and am close to reaching my goal for this month. After jumping from Processing to Cinder, I learned a ton. The biggest challenge was the jump into Cinder, and C++ in general. What took me a few hours to come up with in Processing took 3 or 4 days in Cinder: I don’t really know C++, I hadn’t touched Cinder in over a year, I upgraded Xcode and had to learn my way around it, and I’ve come across a lot of other weird errors and stupid mistakes of my own making.

Luckily, the author of Cinder, Andrew Bell, is now working with me at The Barbarian Group again, and he has been a good sport about answering my silly questions and helping me through some of the learning process. If he weren’t around to help me out, I’d probably be another week behind where I want to be.

So as far as progress goes, here’s a sample of some screenshots from the Processing and Cinder experiments.

I’ve been able to put together a good proof of concept that what I had in mind will work; I just need to tidy it up and make it look good so that I can capture a video and write up my learnings. I’ve realized that I still have more to do with this project, but there’s enough left that it can be another month’s worth of side project, so that’s what I plan on doing. But first, I need to read up more on C++ and get more familiar with Cinder. The next blog post about this project will be the last for the month and will summarize what I’ve learned and showcase the output for this stage of the overall project.


January Kinect Project – Update 1

So far, this month’s project is not moving along as quickly as I had hoped it would, but that’s OK. With the time I have had to work on it, I’ve already learned a lot. And because of what I’ve learned, I’ve concluded that I have to modify my goal a bit.

To get going with this project, I’ve read a good chunk of Making Things See, which is an amazing starting point for anyone new to the Kinect, especially because it uses Processing as the teaching environment. Through some of the examples in the book, I was able to piece together a quick little program that grabs only the image data within a certain distance and disregards the rest. It wasn’t a very complex program to put together, but once you have it working, the Kinect’s limitations are obvious. At this point, it’s not perfect. I had in mind that you’d be able to easily mask out anything in the foreground, but there’s so much noise that it’s impossible to get any sort of clean, defined outline. I feel like by somehow averaging a person’s outline over time, along with doing some color-comparison image processing, you could get to a good point. The other problem is that the Kinect’s RGB camera is also not very high-res, so even if you get close, the image isn’t going to look great.
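For anyone following along with the book, the thresholding I'm describing looks something like the sketch below, which paraphrases the kind of example Making Things See walks through with the SimpleOpenNI library. It's not my actual code, and the 1000 mm cutoff is just an example value.

    // Rough depth-threshold sketch in the spirit of the Making Things See
    // examples: keep RGB pixels closer than a cutoff, black out everything else.
    // Note the depth and RGB images aren't perfectly aligned, which is part of
    // why the results look rough.
    import SimpleOpenNI.*;

    SimpleOpenNI kinect;
    int maxDepth = 1000;  // millimeters; anything farther away is discarded

    void setup() {
      size(640, 480);
      kinect = new SimpleOpenNI(this);
      kinect.enableDepth();
      kinect.enableRGB();
    }

    void draw() {
      kinect.update();
      int[] depthValues = kinect.depthMap();  // depth per pixel, in mm
      PImage rgb = kinect.rgbImage();
      rgb.loadPixels();

      loadPixels();
      for (int i = 0; i < depthValues.length && i < pixels.length; i++) {
        int d = depthValues[i];
        // Keep the color pixel only if there's a valid reading within range.
        if (d > 0 && d < maxDepth) {
          pixels[i] = rgb.pixels[i];
        } else {
          pixels[i] = color(0);
        }
      }
      updatePixels();
    }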

My hope for the future is that there will be a Kinect update with higher resolution and more depth data. In fact, I’ve heard that an updated Kinect is coming out very soon, though we’ll have to see what kind of improvements it really has. In addition to an updated Kinect, I’d like to figure out if there’s a way to capture a photo with a decent DSLR and match up the data from the Kinect with it to get some decent photos.

So after coming to these conclusions, I decided there were two directions I could move in: either continue exploring this project knowing that I have some pretty high technical fences to jump, or take what I’ve learned so far and come up with a new direction that’s more feasible. I’ve decided to pursue the second route, which entails using the Kinect depth data in a way that could be visually cool and fun to play with, while also having the potential to be a nice little installation. This route would also involve a little OpenGL, which I’ve wanted to learn more about too.

More to come soon.

See Project 1 – January 2012

Project 1 – January 2012

As mentioned in my previous post, my plan for 2012 is to produce a side project every month. For January, I’ve decided to create something with the Xbox Kinect. It’s something that I’ve wanted to do ever since I saw some of the amazing stuff people have been doing all over the internet in the year or so since the Kinect was released. I started reading the book Making Things See last month, have done a few tutorials, and will use it as a starting point for this little project.

What I plan to do is use Processing to create a program that takes a photo, removes the background, and replaces it with another background. Pretty much what a green screen does, without the green screen. What the background is replaced with and what happens with that photo is gonna be a surprise. If it works out the way I want it to, I’d like to develop it more and make it into some sort of installation that can be used at work or wherever anyone wants to set it up.

Here’s my breakdown of weekly goals:

Week 1: Research and feasibility scoping

Week 2: Image capture and testing

Week 3: Image saving and uploading

Week 4: Clean up and blog post

So here we go…