Keynotes

Tuesday October 27 Keynote

9.00 – 10.30am

Location: Auditorium

Harnessing Big Personal Data, with Scrutable User Modelling for Privacy and Control

Speaker: Judy Kay (The University of Sydney, Australia)

Chair: Alan Smeaton (Dublin City University, Ireland)

Abstract: My work aims to enable people to harness, control and manage their big personal data. This is challenging because people are generating vast, and growing, collections of personal data. That data is captured by rich personal digital ecosystems of devices, some worn or carried, others fixed or embedded in the environment. Users explicitly store some data, but systems also capture the user’s digital footprints, ranging from simple clicks and touches to images, audio and video. This personal data resides in a bewildering range of places, from personal devices to cloud stores, in multitudes of silos. Big personal data differs from scientific big data in important ways. Because it is personal, it should be handled in ways that enable people to ensure it is managed and used as they wish. It may be of modest size compared with scientific big data, but people consider their data stores big because they are complex and hard to manage. A driving goal for my research has been to tackle the challenges of big personal data by creating infrastructures, representations and interfaces that enable users to scrutinise and control their personal data in a scrutable user model. One important role for user models is personalisation: the user model, a dynamic set of evidence-based beliefs about the user, is the foundation for personalised systems ranging from recommenders to teaching systems. User models may represent anything from the user’s attributes to their knowledge, beliefs, goals, plans and preferences.
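To make the idea concrete, here is a minimal sketch (in Python) of an evidence-based, scrutable user model component. It illustrates the concept only and is not the Personis API; all the names below (Evidence, Component, tell, resolve, scrutinise) are invented for this example.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Evidence:
        """A single observation about the user, with its provenance."""
        source: str    # e.g. "quiz-result", "click-stream"
        value: object  # the observed value or judgement
        time: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class Component:
        """One belief about the user: an attribute, goal or preference."""
        name: str
        evidence: list = field(default_factory=list)

        def tell(self, source, value):
            self.evidence.append(Evidence(source, value))

        def resolve(self):
            # Simplest possible resolver: the most recent evidence wins.
            return self.evidence[-1].value if self.evidence else None

        def scrutinise(self):
            # Scrutability: the user can inspect every piece of evidence
            # behind a belief, not just the resolved value.
            return list(self.evidence)

    knows_python = Component("knows_python")
    knows_python.tell("quiz-result", 0.8)
    print(knows_python.resolve())  # 0.8

A personalised application would read beliefs through resolve(), while a scrutiny interface exposes scrutinise() so the user can see, question and correct the evidence behind each belief.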

Bio: Judy Kay is Professor of Computer Science. She leads the Human Centred Technology Research Cluster, one of three priority clusters in the Faculty of Engineering and IT at the University of Sydney. Her own lab, CHAI (the Computer Human Adapted Interaction Research Group), aims to create new technologies for human computer interaction (HCI). Her personalisation research has created the Personis user modelling framework, a unified mechanism for keeping and managing people’s long-term personal data from diverse sources and the foundation for building personalised systems. Personis models are distinctive in that they were designed to be scrutable: their interfaces enable the user to scrutinise their user model and the personalisation processes based on it. In learning contexts, she has created interfaces for Open Learner Models that make this personal data available in useful forms for long-term learning and self-monitoring.

Her interface research has created the Cruiser Natural User Interaction (NUI) software framework, which provides new ways for people to make use of large interactive tabletops and wall displays. By mining the digital footprints of such interaction, this research is creating new ways for people to learn to collaborate, and to learn and work more collaboratively.

She has published extensively in venues such as the Pervasive, Computer Human Interaction (CHI) and User Modeling (UM, AH, UMAP) conferences, and in journals such as IEEE Transactions on Knowledge and Data Engineering, the International Journal of Artificial Intelligence in Education, User Modeling and User-Adapted Interaction, Personal and Ubiquitous Computing, Communications of the ACM and Computer Science Education. Invited keynote addresses include: UM’94, User Modeling Conference, Boston, USA; IJCAI’95, International Joint Conference on Artificial Intelligence, Montreal, Canada; ICCE’97, International Conference on Computers in Education, Kuching, Malaysia; ITS’2000, Intelligent Tutoring Systems, Montreal, Canada; AH’2006, Adaptive Hypermedia and Adaptive Web-Based Systems, Dublin, Ireland; ITS’2008, Intelligent Tutoring Systems, Montreal, Canada; EC-TEL’2010, European Conference on Technology Enhanced Learning, Barcelona, Spain; C5’2012, International Conference on Creating, Connecting and Collaborating through Computing, Playa Vista, USA; ICLS’12, International Conference of the Learning Sciences, Sydney, Australia; and LASI’13, Learning Analytics Summer Institute, co-organised by the Society for Learning Analytics Research (SoLAR) and Stanford University.

Thursday October 29 Keynote

9.00 – 10.30am

Location: Auditorium

Vision-enhanced Immersive Interaction and Remote Collaboration with Large Touch Displays

Speaker: Zhengyou Zhang (Microsoft Research, USA)

Chair: Xiaofang Zhou (The University of Queensland, Australia)

Abstract: Large displays are becoming a commodity and, increasingly, they are touch-enabled. In this keynote, we describe a system called ViiBoard (Vision-enhanced Immersive Interaction with touch Board) that enables natural interaction and immersive remote collaboration with large touch displays by adding a commodity color-plus-depth sensor. It consists of two parts.

The first part, VTouch, augments touch input with visual understanding of the user to improve interaction with a large touch-sensitive display such as the Microsoft Surface Hub. An RGBD sensor such as the Microsoft Kinect adds the visual modality and enables new interactions beyond touch. Through visual analysis, the system understands where the user is, who the user is, and what the user is doing, even before the user touches the display. Such information is used to enhance interaction in multiple ways. For example, a user can use simple gestures to bring up menu items such as a color palette or a soft keyboard; menu items can be shown where the user is and can follow the user; hovering can show information before the user commits to a touch; the user can perform different functions (for example, writing and erasing) with different hands; and each user’s preference profile can be maintained, distinct from other users’. In user studies, participants very much appreciated the value of these and other enhanced interactions.
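As a small illustration of how vision-derived context can change the meaning of a touch, here is a hypothetical dispatcher in Python. The user identity and touching hand are assumed to be reported by the RGBD pipeline; none of the names below come from VTouch itself.

    from dataclasses import dataclass

    @dataclass
    class Profile:
        writing_hand: str = "right"
        pen_color: str = "black"

    # Per-user preference profiles, maintained distinctly for each user.
    PROFILES = {"alice": Profile("right", "blue"),
                "bob": Profile("left", "red")}

    def handle_touch(user: str, hand: str, point: tuple) -> str:
        """Decide what a touch means, given who touched and with which
        hand (both assumed to come from the vision pipeline)."""
        profile = PROFILES.get(user, Profile())
        if hand == profile.writing_hand:
            return f"draw at {point} in {profile.pen_color}"
        return f"erase at {point}"  # the non-writing hand erases

    print(handle_touch("alice", "right", (120, 80)))  # draw in blue
    print(handle_touch("alice", "left", (120, 80)))   # erase

The same pattern extends to the other behaviours in the abstract, such as placing menus where the user stands or previewing information on hover before contact.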

The second part, ImmerseBoard, is a system for remote collaboration through a digital whiteboard that gives participants a 3D immersive experience, enabled by an RGBD sensor mounted on the side of a large touch display (the same setup as in VTouch). Using 3D processing of the depth images, life-sized rendering, and novel visualizations, ImmerseBoard emulates writing side by side on a physical whiteboard, or alternatively on a mirror. User studies involving three tasks show that, compared with standard video conferencing with a digital whiteboard, ImmerseBoard gives participants a quantitatively better ability to estimate their remote partner’s eye gaze direction, gesture direction, intention, and level of agreement. Moreover, these quantitative capabilities translate qualitatively into a heightened sense of being together and a more enjoyable experience. ImmerseBoard’s form factor is suitable for practical and easy installation in homes and offices.
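The 3D processing behind such a system starts from a standard step: back-projecting each depth pixel to a 3D point through the camera’s pinhole intrinsics. The sketch below (Python/NumPy) shows only this generic step; the intrinsic values are placeholders, not the actual sensor calibration used by ImmerseBoard.

    import numpy as np

    # Placeholder pinhole intrinsics (focal lengths, principal point);
    # a real system would use the RGBD sensor's own calibration.
    FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

    def depth_to_points(depth):
        """Back-project an HxW depth map (metres) to an HxWx3 point cloud.
        For pixel (u, v) with depth z:
            X = (u - CX) * z / FX,  Y = (v - CY) * z / FY,  Z = z
        """
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - CX) * depth / FX
        y = (v - CY) * depth / FY
        return np.stack([x, y, depth], axis=-1)

    cloud = depth_to_points(np.full((480, 640), 1.5))  # a wall 1.5 m away

The resulting point cloud can then be rendered life-sized from the remote partner’s viewpoint, which is what makes gaze and gesture direction legible in the side-by-side and mirror visualizations.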

Bio: Zhengyou Zhang is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) (2005, for contributions to robust computer vision techniques) and a Fellow of the Association for Computing Machinery (ACM) (2013, for contributions to computer vision and multimedia). He is the Founding Editor-in-Chief of the newly established IEEE Transactions on Autonomous Mental Development (IEEE T-AMD), and is on the editorial boards of the International Journal of Computer Vision (IJCV), Machine Vision and Applications, and the Journal of Computer Science and Technology (JCST). He was on the editorial boards of the IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE T-PAMI) from 1999 to 2005, the IEEE Transactions on Multimedia (IEEE T-MM) from 2004 to 2009, and the International Journal of Pattern Recognition and Artificial Intelligence (IJPRAI) from 1997 to 2008, among others. He is listed in Who’s Who in the World, Who’s Who in America and Who’s Who in Science and Engineering.

Before joining Microsoft, Zhengyou worked for 11 years at INRIA (the French National Institute for Research in Computer Science and Control), where he was a member of the Computer Vision and Robotics group and held the position of Senior Research Scientist from 1991. In 1996–1997, he spent a one-year sabbatical as an Invited Researcher at the Advanced Telecommunications Research Institute International (ATR) in Kyoto, Japan.

He holds more than 100 US patents and has about 20 patents pending. He also holds a few Japanese patents for inventions made during his sabbatical at ATR. He has published over 200 papers in refereed international journals and conferences, and is the author of the following books