Projects

1. Acoustic Edutainment Interface (AEDIN)

AEDIN is an auditory and touchscreen interface built to support education for blind and visually impaired students. AEDIN was developed at the School of Informatics at Indiana University-Purdue University Indianapolis (IUPUI) and tested at the Indiana School for the Blind and Visually Impaired (ISBVI). It functions as a knowledge base of aural educational content (recorded essays and quiz questions about those essays) navigated through touch input, with audio output in the form of audemes and navigational feedback sounds.

Audemes are short, non-speech sound symbols, under seven seconds long, composed of various combinations of sound effects referring to natural or man-made contexts, abstract sounds, and even snippets of popular music.

The aim of designing and implementing AEDIN was to develop an application able to make use of audemes. In AEDIN, audemes serve as aural covers that let users anticipate larger pieces of content, such as TTS essays. The notion of aural covers can be understood by analogy with existing image- and video-sharing websites. On image-sharing sites (e.g., http://www.flickr.com), a thumbnail anticipates the larger image that is shown when the thumbnail is clicked. Similarly, on video-sharing sites (e.g., http://www.dailymotion.com), a thumbnail with rotating sample screenshots of the video anticipates its content. These techniques help users anticipate the content and make decisions before engaging with it.
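
To make the aural-cover idea concrete, here is a minimal TypeScript sketch of how a touch interface might play an audeme as a preview before speaking the full essay. All names and the two-tap interaction are illustrative assumptions, not AEDIN's actual implementation.

```typescript
// Sketch of the "aural cover" pattern: a short audeme previews an essay
// before the full text-to-speech playback is requested.
// EssayEntry, previewEssay, openEssay, onTap are hypothetical names.

interface EssayEntry {
  title: string;
  audemeUrl: string;   // short (< 7 s) non-speech sound acting as the aural cover
  essayText: string;   // long-form content, spoken only on explicit confirmation
}

// First touch: play the audeme so the listener can anticipate the content.
function previewEssay(entry: EssayEntry): void {
  const audeme = new Audio(entry.audemeUrl);
  void audeme.play();
}

// Second touch (confirmation): speak the full essay with the browser's TTS.
function openEssay(entry: EssayEntry): void {
  const utterance = new SpeechSynthesisUtterance(entry.essayText);
  window.speechSynthesis.speak(utterance);
}

// Tap once to hear the cover, tap the same item again to hear the essay.
let lastPreviewed: EssayEntry | null = null;
function onTap(entry: EssayEntry): void {
  if (lastPreviewed === entry) {
    openEssay(entry);
    lastPreviewed = null;
  } else {
    previewEssay(entry);
    lastPreviewed = entry;
  }
}
```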

AEDIN – Acoustic Edutainment Interface from M Ferati on Vimeo.

2. Advanced Support and Creation-Oriented Library Tool for Audemes (ASCOLTA)

ASCOLTA (the Advanced Support and Creation-Oriented Library Tool for Audemes; Italian for "listen") is an interactive application prototype that enables individuals without an audio design background to create effective audemes. The aim of designing and implementing ASCOLTA is three-fold.

First, to propose a design whose interface guides users through the process of generating well-formed non-speech sounds (audemes). The created audemes can then be used in different scenarios, including as supplementary teaching materials; for example, teachers can play an audeme several times during a lecture while explaining the related subject material.

Second, to implement a design that operationalizes the guidelines derived empirically from the experiments reported in chapter 6, which ensure the creation of well-formed audemes. ASCOLTA generates audemes by automatically concatenating sounds based on sound attributes, such as the type of sound (music or sound effect) and the listening mode (causal, referential, or reduced); a sketch of this concatenation step appears after the third aim below.

Third, to facilitate the process of generating well-formed audemes for a non-technical audience. While sound design has become relatively easy in recent years, it still requires basic skills with commercially available sound design tools, such as Soundtrack Pro or Sound Forge. ASCOLTA streamlines this process by automatically generating well-formed audemes and offering users a simple interface to modify, edit, and save them.
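
As a rough illustration of the second aim, the TypeScript sketch below assembles an audeme by concatenating sound clips according to their attributes. The ordering rule (sound effects before music) and all names are placeholder assumptions for illustration only; the actual well-formedness guidelines are those derived in chapter 6.

```typescript
// Sketch of attribute-driven audeme assembly. The ordering rule is a
// placeholder, not the empirically derived guidelines ASCOLTA implements.

type SoundType = "music" | "soundEffect";
type ListeningMode = "causal" | "referential" | "reduced";

interface SoundClip {
  name: string;
  url: string;
  durationSec: number;
  type: SoundType;
  mode: ListeningMode;
}

const MAX_AUDEME_SECONDS = 7; // audemes stay under seven seconds

// Concatenate clips into an audeme: order by a (placeholder) attribute rule
// and stop before the total duration exceeds the audeme length limit.
function assembleAudeme(clips: SoundClip[]): SoundClip[] {
  const ordered = [...clips].sort((a, b) => {
    if (a.type !== b.type) return a.type === "soundEffect" ? -1 : 1;
    return a.mode.localeCompare(b.mode);
  });

  const audeme: SoundClip[] = [];
  let total = 0;
  for (const clip of ordered) {
    if (total + clip.durationSec > MAX_AUDEME_SECONDS) break;
    audeme.push(clip);
    total += clip.durationSec;
  }
  return audeme;
}
```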

ASCOLTA from M Ferati on Vimeo.

3. Webtime

Webtime is an accessible website prototype about the history of the World Wide Web, targeted at blind and low-vision high-school students. The website, intended as a complementary resource for informal learning, presents key historical figures, places, and events, as well as landmark ideas and technologies. Sample content for Webtime has been reused from open-access resources on the topic and restructured into a non-trivial web architecture so that investigators can experiment with aural navigation patterns. Specifically, the information architecture of Webtime includes five types of topics (people, places, technologies, ideas, and news) with 50 topic instances, 36 list page instances, and 18 types of hypertextual associations (83 instances), e.g., places related to a technology and ideas related to a news story.
Webtime is optimized for Internet Explorer v8.0 and tested for accessibility with the W3C validator. To auralize content, the prototype has been optimized for the Window-Eyes v7.5 screen reader. The website has been developed using PHP as the scripting language and MySQL as the database technology. The advanced back-navigation strategies are dynamically generated on the server side by matching the user's recorded backtracking history against the conceptual elements (topic or list) marked on the pages of the information architecture. In terms of input to control navigation, Webtime supports dynamically generated link labels (e.g., GO BACK TO <Page Name>) that update as the user traverses topics. Try it at: http://discern.uits.iu.edu:8670/NSF_WEB_TB/
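
The back-navigation idea can be sketched as follows. Webtime itself is implemented in PHP with MySQL; this TypeScript fragment only illustrates, under assumed data structures, how a recorded backtracking history might be matched against the conceptual elements (topic or list) to produce GO BACK TO labels.

```typescript
// Illustrative sketch of topic-/list-aware back navigation. VisitedPage and
// backLinks are hypothetical names, not Webtime's server-side code.

type PageKind = "topic" | "list";

interface VisitedPage {
  title: string;  // e.g. "Tim Berners-Lee" or "Technologies"
  kind: PageKind; // conceptual element the page is marked with
}

// Walk the history backwards and propose one "GO BACK TO ..." label per kind,
// skipping the current page and keeping only the most recent match.
function backLinks(history: VisitedPage[]): string[] {
  const labels: string[] = [];
  const seenKinds = new Set<PageKind>();
  for (let i = history.length - 2; i >= 0; i--) {
    const page = history[i];
    if (!seenKinds.has(page.kind)) {
      seenKinds.add(page.kind);
      labels.push(`GO BACK TO ${page.title} (${page.kind})`);
    }
  }
  return labels;
}

// Example: after Technologies (list) -> Tim Berners-Lee (topic) -> World Wide
// Web (topic), the user is offered both a topic-based and a list-based way back.
console.log(backLinks([
  { title: "Technologies", kind: "list" },
  { title: "Tim Berners-Lee", kind: "topic" },
  { title: "World Wide Web", kind: "topic" },
]));
```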


Webtime Prototype

4. Green-savers Mobile

Green-savers Mobile (GSM) is a web-based mobile application prototype on energy-saving tips and green products for the home. This sample mobile site was developed to demonstrate aural mobile navigation in a non-trivial web information architecture. GSM includes 4 types of topics (tip, product, tax credit, and rebate) with 65 topic instances overall, 43 list page instances, and 6 types (64 instances) of hypertextual associations (e.g., products related to a tip, tips related to a product, rebates available for a product).

GSM is optimized for touch-screen mobile devices, specifically Apple's iPhone and iPod Touch running iOS 4.1. To auralize content, GSM features dynamic, real-time text-to-speech (TTS) of page content and links. The custom TTS script, based on the API of the iSpeech service (www.ispeech.org), converts the text visible on the screen to audio, which is played back to the user in the same order in which it appears onscreen. The back-navigation strategies dynamically populate the user's backtracking history based on the current browsing session. In terms of input to control navigation, GSM supports explicit link labels (e.g., Go back to <List Page Name>) and, alternatively, custom touch gestures: a two-finger swipe left activates topic-based back navigation, and a two-finger swipe up activates list-based back navigation. Try it at: http://discern.uits.iu.edu:8670/NSF_MOBILE/
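
A minimal sketch of the two-finger gestures, using standard browser touch events, is shown below. The goTopicBack and goListBack handlers are placeholders, and the gesture detection is deliberately simplified compared to GSM's actual implementation.

```typescript
// Sketch of the two-finger back gestures using the standard TouchEvent API.

function goTopicBack(): void { console.log("topic-based back"); } // placeholder
function goListBack(): void { console.log("list-based back"); }   // placeholder

let start: { x: number; y: number } | null = null;

document.addEventListener("touchstart", (e: TouchEvent) => {
  // Only track gestures performed with exactly two fingers.
  if (e.touches.length === 2) {
    start = { x: e.touches[0].clientX, y: e.touches[0].clientY };
  } else {
    start = null;
  }
});

document.addEventListener("touchend", (e: TouchEvent) => {
  if (!start || e.changedTouches.length === 0) return;
  const end = e.changedTouches[0];
  const dx = end.clientX - start.x;
  const dy = end.clientY - start.y;
  start = null;

  const THRESHOLD = 60; // minimum movement (px) to count as a swipe
  if (dx < -THRESHOLD && Math.abs(dx) > Math.abs(dy)) {
    goTopicBack();      // two-finger swipe left
  } else if (dy < -THRESHOLD && Math.abs(dy) > Math.abs(dx)) {
    goListBack();       // two-finger swipe up
  }
});
```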


Green-savers Mobile Prototype
