Since I'm still on winter break, I took the better part of today to mess around with my new (well, secondhand) Kinect. I used the excellent freenect library and its Python bindings to whip up this simple little demo.
There's a lot here that is very similar to how we work in SMALLab, especially how we deal with a "buttonless" interface. Using the Z-axis (forward and back on the Kinect, up and down in SMALLab) makes for a pretty nice way of changing from active to inactive state.
I'm looking forward to playing with this some more in the future. It's not the most responsive system in the world, but I can see simple and casual uses for this kind of interaction in schools, public displays, etc.
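The Z-axis trigger described above can be sketched as a small state machine. This is a hypothetical reconstruction, not the demo's actual code: the threshold values, the hysteresis band, and the `zone_state` name are all my assumptions. With real hardware, the depth reading would come from the freenect Python bindings (e.g. `depth, _ = freenect.sync_get_depth()`), but the logic itself needs nothing Kinect-specific:

```python
# Sketch of a "buttonless" Z-axis trigger. Thresholds are illustrative;
# a real setup would tune them to the installation.

def zone_state(z_mm, current, enter_mm=900, exit_mm=1100):
    """Return 'active' or 'inactive' for a depth reading, using two
    thresholds (hysteresis) so the state doesn't flicker at the boundary."""
    if current == "inactive" and z_mm < enter_mm:
        return "active"      # hand pushed forward past the near threshold
    if current == "active" and z_mm > exit_mm:
        return "inactive"    # hand pulled back past the far threshold
    return current           # inside the band between thresholds: no change

# Feeding it a series of depth readings (millimeters from the sensor):
state = "inactive"
for z in (1500, 1000, 850, 1000, 1200):
    state = zone_state(z, state)
```

The gap between `enter_mm` and `exit_mm` is what makes this feel solid in practice: a hand hovering right at a single threshold would otherwise toggle the state every frame.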
The internet is a potentially powerful tool for enhancing learning environments. However, its unstructured nature can present a challenge to both students and teachers. For example, students may lack the ability to pursue a learning goal strategically and therefore find themselves overwhelmed by the amount of information at their fingertips, wandering aimlessly. Teachers, on the other hand, may be at a loss as to how to assess students' learning in such a setting and unable to understand the connections that students are making.
Reflecting Pool (see attached interactive PDF) aims to improve internet-based activities for students by increasing metacognitive awareness (i.e., planning, monitoring, and control of thinking) and for teachers by giving them greater access to the thought processes of their students. Furthermore, it offers a unique opportunity for all to understand and explore the collective knowledge of the class.
The positions of the glowballs, and other information coming from SCREM, can be relayed to a little Nokia N800 palmtop over the WiFi connection.
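The relay itself could look something like the sketch below. The post doesn't show SCREM's actual wire format, so JSON-over-UDP, the port number, and the function names here are all assumptions, just to illustrate the shape of the idea:

```python
# Hypothetical sketch of relaying glowball positions over WiFi.
# SCREM's real protocol is not shown in the post; this assumes a simple
# JSON-over-UDP scheme for illustration.
import json
import socket

def encode_positions(balls):
    """Serialize {ball_id: (x, y, z)} into a UDP payload."""
    return json.dumps(balls).encode("utf-8")

def decode_positions(payload):
    """Inverse of encode_positions, as the N800 client would run it."""
    return json.loads(payload.decode("utf-8"))

# Sender side (inside SMALLab), broadcasting one tracker frame:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
frame = {"red": [0.2, 0.5, 1.1], "blue": [-0.4, 0.1, 0.8]}
sock.sendto(encode_positions(frame), ("127.0.0.1", 9000))  # the N800's address in practice
sock.close()
```

UDP fits this kind of stream well: each frame supersedes the last, so a dropped packet costs nothing but one stale frame on the handheld's display.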
Tracking is still a little rough, and, as I mention in the video, I think the IR cams are grabbing the N800 a little bit, so we'll need to keep it outside of the area--at least until the tracker is a bit more solid. Still, it should be a handy way to bring another interface--and other participants--into SMALLab. In fact, I gave the N800 to James in the office, and he could watch the movements of the balls through the wall.
I'm sure that's useful somehow.
Here is a quick video explanation of the HitArea and DragArea render engines.
The engines can track when a pointer enters and leaves them, and the DragArea engine also reports back the pointer's relative position--handy for sliders, etc.
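A minimal sketch of that behavior might look like the classes below. The names and API here are illustrative guesses, not the real render engines' code; the point is the enter/leave bookkeeping and the relative-position math that makes sliders easy:

```python
# Illustrative sketch of HitArea/DragArea-style behavior. Class and method
# names are assumptions, not the actual engine API.

class HitArea:
    """Rectangular region that reports pointer enter/leave events."""
    def __init__(self, x, y, w, h):
        self.x, self.y, self.w, self.h = x, y, w, h
        self.inside = False

    def update(self, px, py):
        """Feed a pointer position; return 'enter', 'leave', or None."""
        now_inside = (self.x <= px <= self.x + self.w and
                      self.y <= py <= self.y + self.h)
        event = None
        if now_inside and not self.inside:
            event = "enter"
        elif not now_inside and self.inside:
            event = "leave"
        self.inside = now_inside
        return event

class DragArea(HitArea):
    """Like HitArea, but also reports the pointer's position relative to
    the area (0.0-1.0 on each axis)--the part that makes sliders easy."""
    def relative(self, px, py):
        return ((px - self.x) / self.w, (py - self.y) / self.h)

slider = DragArea(10, 10, 100, 20)
slider.update(60, 20)             # pointer enters the area
rx, ry = slider.relative(60, 20)  # rx is 0.5: the slider sits at its midpoint
```

The enter/leave events fire only on transitions, so client code gets one callback per crossing rather than a flood of "still inside" notifications every frame.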
We were experimenting with different ideas for how to represent the third dimension you have access to with the SMALLab installation. This is a trial using a camera viewpoint that tracks the position of the ball as you move through a space with it. It fools you into thinking that a 3D world is changing at your feet.
It's not a perfect solution. It only works for one person, and there are limits to how high you can pretend the Z-axis goes. Still, it's an interesting way of looking at the problem, and it might open more doors later on.
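The core of the trick is small enough to sketch. Everything here is a guessed reconstruction--the function name, the coordinate convention, and the clamp value are mine, not the trial's code--but it shows both halves of the idea: the virtual camera shadows the tracked ball, and the Z-axis gets capped because the illusion only stretches so far:

```python
# Hypothetical sketch of the tracking-camera viewpoint. The z clamp
# reflects the limit mentioned above on how high the fake Z-axis can go.

def camera_pose(ball_x, ball_y, ball_z, z_max=2.0):
    """Place the virtual camera directly over the tracked ball, looking
    straight down at the floor, with the ball's height clamped."""
    z = min(max(ball_z, 0.0), z_max)
    return {
        "eye": (ball_x, ball_y, z),        # camera follows the one tracked ball
        "look_at": (ball_x, ball_y, 0.0),  # always aimed at the floor below it
    }
```

Because the camera follows a single tracked object, the perspective is only correct from that one person's point of view--which is exactly the one-person limitation mentioned above.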
This is great for the project. One of the points I drove home during my presentation was that I wanted the Pleech to be as much about the process of building one and sharing improvements with people as it is the end product. Hopefully wider public exposure will help bring more people into this process and start a good debate on the best way to build things like this.
It is finished:
Well, actually, it's only just now getting started, really. The instructable is part of a hopefully ongoing process for refining the Pleech concept by harnessing the power of the instructables community. It's only been up for a few hours and already has a fan! Woowee!
Copyright Mike Edwards 2006-2009. All content available under the Creative Commons Attribution ShareAlike license, unless otherwise noted.