You Can Judge A Book By Its Sensors


Have you ever been lost in the story of a great book? Do you remember being completely immersed in the narrative of a faraway land, powered by your imagination?

Next-Generation Learning Content

Great storytellers have always been able to use great prose to capture our minds and take us on delightful journeys. Over the years, our books have become even more engaging.

Whether through pop-up books, textured art, enhanced audio pages, or scratch-n-sniff photos, writers have long been attempting to engage our senses more fully when we read. Now, the emergence of low-cost sensors in modern PC devices will enable reading to be deeply engaging for generations to come.

Our Seven Sensors

On many new PCs, smartphones, and tablets, sensors are being used to enhance the user experience. Sensors allow the computer and its applications to adapt to their current environment. For example, location sensors such as Global Positioning System (GPS) receivers or wireless wide area network (WWAN) radios enable apps and gadgets to configure themselves dynamically for locally relevant content, such as presenting the local weather or adjusting your calendar to the current time zone after an airline flight.
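
To make the time-zone example concrete, here is a minimal sketch, assuming nothing more than a longitude fix from a location sensor, of how an app might approximate a UTC offset so a calendar could re-render after a flight. A real application would query the operating system's time-zone database, since political time zones do not follow neat 15-degree bands.

```cpp
#include <cmath>
#include <cstdio>

// Rough illustration only: Earth turns 360 degrees in 24 hours, so each
// hour of UTC offset spans about 15 degrees of longitude. Real apps
// should consult the OS time-zone database instead.
int approxUtcOffsetHours(double longitudeDegrees) {
    return static_cast<int>(std::lround(longitudeDegrees / 15.0));
}

int main() {
    // A longitude as it might arrive from a GPS or WWAN location sensor.
    double longitudeSeattle = -122.3;
    std::printf("Approximate UTC offset: %+d hours\n",
                approxUtcOffsetHours(longitudeSeattle));  // prints -8
}
```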

For the most part, sensors are invisible. You never see them outside your computer; they are embedded components, integrated with the hardware, that your operating system and applications detect and use.

Video games, medical devices, and military applications have used sensors extensively for decades. Kinect for Xbox 360, digital blood pressure monitors, and weather radar are all examples of sensor technology applied across various industries. Education has now reached the point where content publishers and device makers can create rich experiences that go beyond digitized text and video on a screen by incorporating sensors into their applications.

Sensors in Windows

Windows has used sensors in mobile computing scenarios for a long time. In Windows 7, Microsoft introduced the Windows Sensor and Location Platform. The platform gives device manufacturers a standardized way to expose sensor devices to developers and consumers. In turn, this allows publishers to build consistent experiences on a variety of Windows platforms without being locked into one device or a fragmented device ecosystem.
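
For developers, the platform surfaces every sensor through a single COM interface, ISensorManager, documented in the Sensor API that shipped with Windows 7. The sketch below enumerates whatever sensors a machine exposes; error handling is trimmed to the essentials for brevity.

```cpp
#include <windows.h>
#include <sensorsapi.h>   // ISensorManager, ISensorCollection, ISensor
#include <sensors.h>      // sensor category and type GUIDs
#include <cwchar>
#pragma comment(lib, "sensorsapi.lib")
#pragma comment(lib, "oleaut32.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    // One manager object brokers every sensor the OS knows about.
    ISensorManager* manager = nullptr;
    if (SUCCEEDED(CoCreateInstance(CLSID_SensorManager, nullptr,
                                   CLSCTX_INPROC_SERVER,
                                   IID_PPV_ARGS(&manager)))) {
        ISensorCollection* sensors = nullptr;
        if (SUCCEEDED(manager->GetSensorsByCategory(SENSOR_CATEGORY_ALL,
                                                    &sensors))) {
            ULONG count = 0;
            sensors->GetCount(&count);
            for (ULONG i = 0; i < count; ++i) {
                ISensor* sensor = nullptr;
                if (SUCCEEDED(sensors->GetAt(i, &sensor))) {
                    BSTR name = nullptr;
                    if (SUCCEEDED(sensor->GetFriendlyName(&name))) {
                        std::wprintf(L"Sensor: %s\n", name);
                        SysFreeString(name);
                    }
                    sensor->Release();
                }
            }
            sensors->Release();
        }
        manager->Release();
    }
    CoUninitialize();
}
```

Because every device vendor plugs into this same interface, an app written against it keeps working as new sensor hardware appears, which is exactly the consistency publishers need.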

Windows provides a natural mechanism for publishers to take advantage of the rapid growth in sensors and to make sense of sensor data for their apps and their consumers. The real power of sensors is revealed when data from multiple sensor elements are fused together to create far more powerful experiences.
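
As a simple illustration of fusion (my own example, not one from the platform documentation): a gyroscope reports smooth rotation rates but drifts over time, while an accelerometer gives an absolute tilt angle but is noisy. A complementary filter blends the two into an estimate better than either sensor alone.

```cpp
#include <cstdio>

// Complementary filter: a standard, lightweight sensor-fusion technique.
// Trust the integrated gyroscope in the short term and let the noisy but
// drift-free accelerometer slowly correct the running estimate.
double fuseTilt(double previousAngleDeg, double gyroRateDegPerSec,
                double accelAngleDeg, double dtSeconds) {
    const double alpha = 0.98;  // blend weight, tuned per device
    return alpha * (previousAngleDeg + gyroRateDegPerSec * dtSeconds)
         + (1.0 - alpha) * accelAngleDeg;
}

int main() {
    double angle = 0.0;
    // One simulated reading: rotating at 10 deg/s for 20 ms while the
    // accelerometer independently reports a 0.5-degree tilt.
    angle = fuseTilt(angle, 10.0, 0.5, 0.02);
    std::printf("Fused tilt estimate: %.3f degrees\n", angle);  // ~0.206
}
```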

New Learning Scenarios with Sensors

Imagine a student is assigned a video podcast tutorial as part of their homework. The sensors on their Windows PC or tablet can detect whether the audio is on and whether a person is in front of the screen, and the built-in camera can use facial recognition to confirm that the student assigned the work is actually the one doing it.
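
A hypothetical sketch of how those signals might combine follows. The three query functions are stand-ins of my own naming for real platform calls (audio endpoint state, camera-based presence detection, a facial-recognition service) and are stubbed out here so the sketch runs.

```cpp
#include <cstdio>
#include <string>

// Stand-ins for real sensor queries; each would call into platform APIs.
bool audioIsPlaying()        { return true; }  // e.g., audio endpoint state
bool personInFrontOfScreen() { return true; }  // e.g., camera/proximity sensor
bool faceMatches(const std::string& studentId) {
    return studentId == "student-42";          // e.g., facial-recognition service
}

// Fuse the three signals into one "verified engagement" flag.
bool assignmentBeingWatchedBy(const std::string& studentId) {
    return audioIsPlaying() && personInFrontOfScreen() && faceMatches(studentId);
}

int main() {
    std::printf("Verified: %s\n",
                assignmentBeingWatchedBy("student-42") ? "yes" : "no");
}
```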

Reading is the “killer app” for sensors. Imagine using computer vision to track eye movement as the reader scans the page while reading aloud. Together, the audio and vision sensors can detect where the student is struggling or building fluency. A data log would automatically journal in the background not just the volume of content read but also tailored insight into where his or her teacher can provide additional coaching for fluency.
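
One way such a journal might be structured (a sketch with hypothetical field names, not any real product's schema): each record pairs the gaze dwell time from the vision sensor with a stumble flag from audio analysis, and a simple aggregation surfaces the words worth coaching.

```cpp
#include <cstdio>
#include <map>
#include <string>
#include <vector>

// One record per word observed while the student reads aloud.
struct WordObservation {
    std::string word;
    double gazeDwellMs;   // how long the eyes lingered (vision sensor)
    bool audibleStumble;  // hesitation or repetition flagged by audio analysis
};

// Aggregate the background journal into per-word trouble counts that a
// teacher could review when planning fluency coaching.
std::map<std::string, int> troubleSpots(const std::vector<WordObservation>& log,
                                        double dwellThresholdMs) {
    std::map<std::string, int> counts;
    for (const WordObservation& obs : log) {
        if (obs.gazeDwellMs > dwellThresholdMs || obs.audibleStumble) {
            ++counts[obs.word];
        }
    }
    return counts;
}

int main() {
    std::vector<WordObservation> log = {
        {"the", 120.0, false}, {"phenomenon", 950.0, true}, {"was", 110.0, false},
    };
    for (const auto& entry : troubleSpots(log, 600.0)) {
        std::printf("\"%s\" needs coaching (%d event(s))\n",
                    entry.first.c_str(), entry.second);
    }
}
```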

Senses v. Sensors

According to brain researcher Dr. John Medina, the more senses we use when recording a memory, the more elaborately that memory is stored in our brains. This multi-sensory experience enables more efficient and effective recall. In an academic context, if students can learn and demonstrate their competency in a sensory-rich environment, they are likely to perform better than they would in a traditional classroom (Medina).

Publishers and device makers can use sensor technology to create sensory-rich learning environments. These can range from total body immersion to simple onscreen manipulatives that respond to interactive elements in the physical world. Machines equipped with sensors can respond appropriately to changing conditions and record experiences, much as our own senses do.

Modern Publishing with Sensors

Today, most digital books merely shift content orientation when you rotate your device. Some content publishers are starting to incorporate user input through touch or speech, but without any data analytics on educator or student engagement. Sensor-based content publishing is nascent at best.

Very few publishers have moved away from their 20th-century perspectives on publishing to take full advantage of sensors. Additionally, with so many personal computing platforms on the market today, choosing a strong and viable platform requires far more diligence than simply publishing a digital form of existing book content.

There is a false notion that digital content is less costly and more eco-friendly than its traditional paper-based counterpart. Creating engaging content that takes full advantage of the medium requires more planning and development than traditional publishing methods. Moreover, consumer tablet devices that change platforms and capabilities every 6 to 14 months force publishers to spend more time adapting to constantly shifting technology than on their core competency: providing great content.

For school leaders and faculty authors, merely adding video or social network integration will be the low bar for content publishing. Next-generation learning content that comes alive through sensors will be the most meaningful transition in the modern publishing era.

Questions for the Modern Publishing Ecosystem

In the past, schools, universities, and education consumers could make decisions about content and technology independently of each other. I submit that this is no longer the case for value-creating decisions. To help produce a self-enriching value chain, I have drafted some key questions:

  1. What physical methods can learners use to interact with or influence the content beyond traditional mouse, pen, or touch inputs? Can they blow on it, speak to it, shine light on it, or use mobile tags and 2D barcodes to interact with your content?
  2. What gaming mechanics are used in making the content more engaging or immersive? Is there telemetry available on those mechanics to inform instruction or provide real-time feedback for the learner?
  3. Can the content move and maintain the learner’s state across platforms/devices and adapt to use sensors based on the learner’s changing context?
  4. Finally, what is the life cycle of content/platform refresh? Can the content be updated independently of the platform, and vice versa? Will content be available to run on previous-generation platforms with full fidelity?

Add your questions, comments, agreements, and disagreements in the comments below and expand the conversation. I am always open to having my thinking challenged or deepened.  We will talk soon.

Resources

Microsoft Corporation, “Introducing the Windows Sensor and Location Platform” White Paper, August 23, 2010

Microsoft Corporation, “Windows Developer Preview – Windows 8 Guide” White Paper, 2011

Medina, John, Brain Rules, www.brainrules.com
