Information quality refers to the degree of excellence with which knowledge or intelligence is communicated, and it encompasses aspects such as validity, accuracy, reliability, bias, transparency, and comprehensiveness. Professional news, public relations, and user-generated content each have their own subtly different information quality issues. Given the recent growth of online video, more and more consumers will be getting their information from online videos, and understanding the quality of video information becomes paramount for consumers who want to make decisions based on it.
This dissertation explores the design and evaluation of collaborative video annotation and presentation interfaces, motivated by the desire for better information quality in online media. We designed, built, and evaluated three systems: Videolyzer, Audio Puzzler, and Videolyzer CE. Together they contribute both interface methods for video annotation and mechanisms for enhancing objective metadata, such as transcripts, as well as subjective assessments of the information quality of the video itself.
Videolyzer is a semi-structured manual analysis system for a video, its transcript, and its annotations, designed to help bloggers and journalists collect, aggregate, and share analyses of the information quality of a video. Its interface design and evaluation explored many questions of general interest to video annotation, including granularity, transcript integration, argumentation systems, and automation. The construction of Videolyzer also entailed adequately defining information quality and operationalizing it as a set of annotations available to jumpstart people's analyses. We evaluated Videolyzer in a laboratory study and found that it enhanced users' awareness and understanding of the video's comprehensiveness, multiple perspectives, context, and quality.
One component of the evaluation of the Videolyzer interface was the effect that including a time-synchronized transcript would have on the user experience. This in turn motivated the need for a high-quality, time-stamped transcript of the video so that interactions with the transcript could be mirrored on the video timeline. Because the accuracy of automatically produced video transcriptions is generally poor under real-world conditions, we developed Audio Puzzler, a game which produces time-stamped transcripts of videos as a by-product of play. We show that Audio Puzzler is an engaging game and further demonstrate high transcript accuracy by using an aggregation algorithm to merge many players' independently produced output.
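The aggregation idea can be illustrated with a minimal sketch. Assuming each player's output is a list of (word, timestamp) pairs that are already aligned position-by-position (a simplifying assumption; aligning free-form game output is more involved, and this is not the dissertation's actual algorithm), a merge can take a majority vote on each word and average the timestamps:

```python
from collections import Counter

def aggregate_transcripts(transcripts):
    """Merge several independently produced time-stamped transcripts.

    Each transcript is a list of (word, seconds) pairs, assumed here to be
    aligned position-by-position across players (hypothetical simplification).
    """
    merged = []
    for position in zip(*transcripts):
        # Majority vote on the word at this position.
        words = Counter(word.lower() for word, _ in position)
        best_word, _ = words.most_common(1)[0]
        # Average the players' timestamps for this position.
        avg_time = sum(t for _, t in position) / len(position)
        merged.append((best_word, round(avg_time, 2)))
    return merged

# Three players' (word, timestamp) outputs; disagreements are outvoted.
a = [("the", 0.0), ("quick", 0.4), ("fox", 0.9)]
b = [("the", 0.1), ("quick", 0.5), ("box", 0.8)]
c = [("thee", 0.2), ("quick", 0.3), ("fox", 1.0)]
print(aggregate_transcripts([a, b, c]))
```

With enough independent players, single-player errors ("box", "thee") are outvoted, which is why aggregation can raise accuracy above any individual transcription.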
Finally, once we had built an integrated system for analyzing videos with transcripts, we asked whether and to what degree the collected annotations could modulate the credibility of video information for end consumers. Our goal was to syndicate the knowledge collected using Videolyzer to a class of users that was less engaged but would still benefit from the additional annotation information. To do this we built simplified visualizations that showed annotation activity, polarity, and sources, packaged into an online video player, and evaluated their impact on credibility. Our evaluation showed that these graphics could influence people's perceptions of the credibility of a video, with stronger effects for more engaged users.