Students are increasingly using digital media in their daily lives. Obsessively playing online games, sharing music, and interacting with peers through social networks and instant messaging are a few examples of how their modes of communication and information-gathering have changed drastically. Universities, however, have largely ignored or resisted this transformation. This project identifies a new tangible interface that can change the way universities leverage rapidly advancing technical contexts for learning.

Teaching with Things draws on Harvard University's unique archival, library, and museum collections in pursuit of a flexible, scalable approach to representing the material and sensory attributes of three-dimensional objects, building "artifactual interfaces," annotating three-dimensional objects, and exploring relationships among objects and multimedia data sets. Once resources for teaching are identified and specified, this interface will present technology-based, interactive teaching modules that integrate existing media with student-produced images and 3D models.

Essential to this approach is the use of the Kinect sensor in student-driven exercises to capture, clean up, and manipulate quick 3D scans of objects. Teaching with Things will develop a rough-and-ready assemblage of existing and emerging technology within which students will learn to situate and transform images and models. With robust 3D models, students will reflect on the challenges such objects present to digital and material comprehension, and will learn to combine these digital models with text, images, and sound to create multimedia analyses of teachable objects.
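The core of a quick Kinect scan is back-projecting the sensor's depth image into a 3D point cloud that students can then clean up and manipulate. The sketch below illustrates that step with the standard pinhole camera model; the intrinsic parameters are approximate published values for the Kinect v1 depth camera and are assumptions here, not part of the original project description.

```python
import numpy as np

# Assumed (approximate) Kinect v1 depth-camera intrinsics, for illustration only.
FX, FY = 594.21, 591.04   # focal lengths in pixels
CX, CY = 339.5, 242.7     # principal point in pixels

def depth_to_point_cloud(depth_mm):
    """Back-project a depth image (millimetres) into an Nx3 point cloud in
    metres using the pinhole camera model. Pixels with zero depth (no sensor
    reading) are dropped, which is the first "clean-up" step on a raw scan."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0          # mm -> m
    valid = z > 0
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

# Tiny synthetic frame: a 2x2 depth image with one missing reading.
frame = np.array([[1000, 0], [1500, 2000]], dtype=np.uint16)
cloud = depth_to_point_cloud(frame)
print(cloud.shape)  # (3, 3) -- three valid pixels, three coordinates each
```

In practice the depth frames would come from a Kinect driver such as libfreenect rather than a synthetic array, but the geometry of the back-projection is the same.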

A number of courses were taught using the "Teaching with Things" platform during the 2012-2013 academic year at Harvard University. These include Romance Studies 220, "Fragments of a Material History of Literature," taught by Jeffrey Schnapp and Matthew Battles in fall 2012.

The working prototype for this project was developed as part of the Google Summer of Code 2012 program under the title "Interface 3D Model Inputs via Kinect to Zeega."

Zeega, developed by metaLAB, is a platform for coordinating, organizing, and telling stories with various digital media. It is an open-source project built chiefly in HTML5 that gives users intuitive editorial tools to combine and manipulate media.