Keynote Speakers of ACM MM 2017
Achin Bhowmik (SID Fellow)
Vice President, Intel, USA
Keynote Talk Title: TBA
Time: 9:15-10:15, Tuesday, Oct. 24, 2017
Dr. Achin Bhowmik is vice president and general manager of the perceptual computing group at Intel, where he leads the development and deployment of Intel® RealSense™ Technology. His responsibilities include creating and growing new businesses in the areas of interactive computing systems, immersive virtual reality devices, autonomous robots and unmanned aerial vehicles. Previously, he served as the chief of staff of the personal computing group, Intel’s largest business unit with over $30B revenues. Prior to that, he led the development of advanced video and display processing technologies for Intel’s computing products. His prior work includes liquid-crystal-on-silicon microdisplay technology and integrated electro-optical devices.
As an adjunct and guest professor, Dr. Bhowmik has advised graduate research and taught courses at the Liquid Crystal Institute at Kent State University, Stanford University, the University of California, Berkeley, Kyung Hee University in Seoul, and the Indian Institute of Technology Gandhinagar. He has over 100 publications, including two books, and over 100 granted and pending patents. He is a Fellow of the Society for Information Display (SID) and serves on the boards of directors of SID and of OpenCV, the organization behind the open-source computer vision library.
Bill Dally (NAE Member; ACM/IEEE/AAAS Fellow)
Senior Vice President, NVIDIA, USA
Keynote Talk Title: TBA
Time: 13:00-14:00, Tuesday, Oct. 24, 2017
Bill Dally joined NVIDIA in January 2009 as chief scientist, after spending 12 years at Stanford University, where he was chairman of the computer science department. Dally and his Stanford team developed the system architecture, network architecture, signaling, routing, and synchronization technology that is found in most large parallel computers today. Dally was previously at the Massachusetts Institute of Technology from 1986 to 1997, where he and his team built the J-Machine and the M-Machine, experimental parallel computer systems that pioneered the separation of mechanism from programming models and demonstrated very low-overhead synchronization and communication mechanisms. From 1983 to 1986, he was at the California Institute of Technology (Caltech), where he designed the MOSSIM Simulation Engine and the Torus Routing Chip, which pioneered “wormhole” routing and virtual-channel flow control. He is a member of the National Academy of Engineering, a Fellow of the American Academy of Arts & Sciences, a Fellow of the IEEE and the ACM, and has received the IEEE Seymour Cray Award and the ACM Maurice Wilkes Award. He has published over 200 papers, holds over 50 issued patents, and is the author of two textbooks. Dally received a bachelor’s degree in Electrical Engineering from Virginia Tech, a master’s in Electrical Engineering from Stanford University, and a Ph.D. in Computer Science from Caltech. He is a cofounder of Velio Communications and Stream Processors.
Injong Rhee
CTO & EVP, Samsung Electronics, Korea
Keynote Talk Title: TBA
Time: 9:15-10:15, Wednesday, Oct. 25, 2017
Dr. Injong Rhee is CTO and EVP of Software and Services, Mobile Communications at Samsung Electronics. Since joining Samsung in 2011, Rhee’s vision has been to strengthen the core competency of Samsung’s software and services. This vision was soon realized through the launch of the award-winning Samsung KNOX and Samsung Pay, in 2013 and 2015 respectively.
Recognizing the growing importance of data security and privacy, Rhee led the development of KNOX, a proprietary security platform built into more than 200 million Samsung devices around the world. The product has received an extensive list of government security certifications, including accreditations from the USA, UK, France, Australia, and China. KNOX has also been named the most secure enterprise solution for mobile devices by Gartner.
Under Rhee’s leadership, Samsung Pay was launched in 2015, a product underpinned by three key principles: simplicity, security, and accessibility. Samsung Pay is the only mobile payments provider to use Magnetic Secure Transmission (MST), an advanced and innovative technology, widely accepted around the world, that enables customers to use Samsung Pay virtually anywhere. Within six months of Samsung Pay’s launch in Korea and the USA, the service garnered more than five million users and facilitated half a billion dollars in transactions. Rhee recently led the expansion of Samsung Pay into Singapore and Spain, with plans to launch the service in a number of additional markets, including Australia and Brazil, in 2016.
Before joining Samsung, Rhee was a tenured professor of Computer Science at North Carolina State University. He is the celebrated inventor of the BIC and CUBIC TCP congestion control algorithms; CUBIC is now the default TCP congestion control algorithm in Linux and Android devices around the world.
Rhee is also a recipient of the NSF CAREER Award, and won the IEEE Communications Society William R. Bennett Prize in 2013 and 2016 for his work on human mobility modeling and Wi-Fi offloading, respectively.
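The CUBIC algorithm mentioned above grows the congestion window as a cubic function of the time elapsed since the last loss event. A minimal sketch of the published window-growth curve is below; this is an illustration, not the Linux kernel implementation, and the constants C = 0.4 and beta = 0.7 are the commonly cited defaults.

```python
# Sketch of the CUBIC window-growth function, W(t) = C*(t - K)^3 + W_max,
# where K is the time for the window to grow back to W_max after a loss.
C = 0.4     # scaling constant (common default)
BETA = 0.7  # multiplicative decrease factor after a loss (common default)

def inflection_time(w_max):
    """K = cube root of W_max * (1 - beta) / C."""
    return ((w_max * (1 - BETA)) / C) ** (1.0 / 3.0)

def cubic_window(t, w_max):
    """Congestion window t seconds after a loss event, given the
    window size w_max at the time of the loss."""
    k = inflection_time(w_max)
    return C * (t - k) ** 3 + w_max

# Immediately after a loss the window is beta * w_max; it climbs back
# to w_max at t = K, plateaus near it, then probes beyond it.
w_max = 100.0
k = inflection_time(w_max)
print(round(cubic_window(0.0, w_max), 6))  # -> 70.0 (= beta * w_max)
print(round(cubic_window(k, w_max), 6))    # -> 100.0 (= w_max)
```

The concave-then-convex shape around W_max is what lets CUBIC stay near the last known safe window size while still probing aggressively on high bandwidth-delay-product paths.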
Edward Y. Chang (IEEE Fellow)
President, HTC, Taiwan
Keynote Talk Title: DeepQ: Advancing Healthcare Through AI and VR
Time: 13:00-14:00, Wednesday, Oct. 25, 2017
Quality, cost, and accessibility form an iron triangle that has prevented healthcare from achieving accelerated advancement in the last few decades: improving any one of the three metrics may lead to degradation of the other two. However, thanks to recent breakthroughs in artificial intelligence (AI) and virtual reality (VR), this iron triangle can finally be shattered. In this talk, I will share the experience of developing DeepQ, an AI platform for AI-assisted diagnosis and VR-facilitated surgery. I will present three healthcare initiatives we have undertaken since 2012, Healthbox, Tricorder, and VR surgery, and explain how AI and VR play pivotal roles in improving diagnosis accuracy and treatment effectiveness. More specifically, I will discuss how we have dealt with not only big-data analytics but also small-data learning, which is typical of the medical domain. The talk concludes with roadmaps and a list of open research issues in multimodal signal processing, fusion, and mining to achieve precision medicine and surgery.
Note: Our Healthbox (with Under Armour) and VR (VIVE and Vivepaper) initiatives were awarded several top prizes at CES and MWC 2016/17, and the Tricorder project was awarded second place (out of 310 entrants) and a US$1,000,000 prize by the XPRIZE Foundation.
Edward Chang currently serves as the President of Research and Healthcare (DeepQ) at HTC. Ed's most notable work is co-leading the DeepQ project (with Prof. CK Peng at Harvard), working with a team of physicians, scientists, and engineers to design and develop mobile wireless diagnostic instruments that can help consumers make their own reliable health diagnoses anywhere, at any time. The project entered the Tricorder XPRIZE competition in 2013 alongside 310 other entrants and was awarded second place, with a US$1M prize, in April 2017. DeepQ is powered by a deep architecture built to quest for cures. A similar deep architecture also powers Vivepaper, an AR product Ed's team launched in 2016 to support immersive reading experiences (for education, training, and entertainment).
Prior to his HTC post, Ed was a director of Google Research for 6.5 years, leading research and development in several areas including scalable machine learning, indoor localization, social networking and search integration, and Web search (spam fighting). His contributions in parallel machine learning algorithms and big-data mining have been recognized through several keynote invitations, and the open-source code his team developed has been downloaded over 30,000 times. His work on indoor localization with Project X was deployed via Google Maps (see the XINX paper and ASIST/ACM SIGIR/ICADL keynotes). Ed's team also developed the Google Q&A system (codename Confucius), which was launched in over 60 countries.
Prior to Google, Ed was a full professor of Electrical Engineering at the University of California, Santa Barbara (UCSB). He joined UCSB in 1999 after receiving his PhD from Stanford University, was tenured in 2003, and was promoted to full professor in 2006. Ed has served on ACM (SIGMOD, KDD, MM, CIKM), VLDB, IEEE, WWW, and SIAM conference program committees, and has co-chaired several conferences, including MMM, MM, ICDE, WWW, and MOOC. He is a recipient of the NSF CAREER Award, the IBM Faculty Partnership Award, and the Google Innovation Award. He is a Fellow of the IEEE for his contributions to scalable machine learning.
Scott Silver
Vice President, Google, USA
Keynote Talk Title: Bringing a Billion Hours to Life
Time: 9:15-10:15, Thursday, Oct. 26, 2017
Scott is a VP of Engineering at Google, leading YouTube engineering. Previously, he worked for 10 years in Ads, leading advertiser and publisher systems, including AdWords, DoubleClick Bid Manager, DoubleClick for Publishers, and AdSense. Scott joined Google in 2006. Prior to Google, Scott led the ordering system at Amazon.com for three years and four Christmas holidays. He also previously led the engineering team at i-drive, an Internet storage startup. Before that, he worked at Connectix, Netscape, and Apple.
Danny Lange
Vice President, Unity Technologies, USA
Keynote Talk Title: Bringing Gaming, VR, and AR to Life with Deep Learning
Time: 13:00-14:00, Thursday, Oct. 26, 2017
Game development is a complex and labor-intensive effort. Game environments, storylines, and character behaviors are carefully crafted, requiring graphics artists, storytellers, and software engineers to work in unison. Often, games end up with a delicate mix of hard-wired behavior in the form of traditional code and somewhat more responsive behavior in the form of large collections of rules. Over the last few years, data-intensive machine learning solutions have obliterated rule-based systems in the enterprise; think Amazon, Netflix, and Uber. At Unity, we have explored the use of deep learning in content creation and deep reinforcement learning in character development. We will share our learnings and the Unity APIs we use, and we hope to inspire content developers to start using these new technologies to create digital experiences that are out of this world.
Dr. Danny Lange is Vice President of AI and Machine Learning at Unity Technologies, where he leads multiple initiatives in the field of applied Artificial Intelligence. Unity is the creator of a flexible and high-performance end-to-end development platform used to create rich interactive 2D, 3D, VR, and AR experiences. Previously, Danny was Head of Machine Learning at Uber, where he led the efforts to build a highly scalable machine learning platform to support all parts of Uber’s business, from the Uber app to self-driving cars. Before joining Uber, Danny was General Manager of Amazon Machine Learning, providing internal teams with access to machine intelligence; he also launched an AWS product that offers machine learning as a cloud service to the public. Prior to Amazon, he was Principal Development Manager at Microsoft, where he led a product team focused on large-scale machine learning for big data. Danny spent eight years working on speech recognition systems, first as CTO of General Magic, Inc., and then as founder of his own company, Vocomo Software. During this time he worked on General Motors’ OnStar Virtual Advisor, one of the largest deployments of an intelligent personal assistant until Siri. Danny started his career as a Computer Scientist at IBM Research.
Danny holds MS and Ph.D. degrees in Computer Science from the Technical University of Denmark. He is a member of ACM and IEEE Computer Society and has numerous patents to his credit.