Andres Navarro (M’95–SM’11) received his degree in Electronic Engineering (1993) and his Master’s in Technology Management (1999), both from Universidad Pontificia Bolivariana in Medellín. He received his PhD in Telecommunications from the Universitat Politècnica de Valencia (2003). He is an IEEE Senior Member and a former advisor to the National Innovation Program on Electronics, Telecommunications and Informatics of the Colombian Research, Development and Future Projects system. He is also an advisor to the Spectrum Management Committee of the Colombian Spectrum Agency. Since 1999, he has served as Director of the i2t research group at Universidad Icesi. His research interests are spectrum management, radio propagation, and m-health. He is currently the Chairman of the Colombian chapter of the IEEE Communications Society.
Topic: From Spectrum Visualization to Urban Computing in Cali, Colombia
Abstract: For several years, a group of Colombian universities in Cali has been working on mobile computing projects in cooperation through the i2ComM initiative. More recently, this cooperation has been extended to China, with student exchanges and professor mobility, as well as various joint activities that have resulted in co-authored publications. Several students have traveled to China as part of this cooperation, and we intend to take it well beyond its current scope and expand our activities and achievements. In this presentation we will discuss some of the activities and projects we are carrying out in Colombia as part of the Consortium, including the use of game engines (JMonkey and Unity) and virtual reality for 5G radio channel simulation, as well as telecommunications training and spectrum management education using VR tools. Second, we will present some urban computing initiatives developed jointly by Colombian and Chinese universities, including the use of ad-hoc large-scale sensor networks and big data, which aim to collect many kinds of urban and social data and present them with visualization tools.
Liang Lin is the Executive Director of SenseTime Research and a full Professor at Sun Yat-sen University. He currently leads the SenseTime R&D teams developing cutting-edge, deliverable solutions in computer vision, data analysis and mining, and intelligent robotic systems. He has authored and co-authored more than 100 papers in top-tier academic journals and conferences (e.g., 15 papers in TPAMI/IJCV). He serves as an Associate Editor of IEEE Transactions on Human-Machine Systems and has served as an Area Chair for numerous conferences, including CVPR, ICME, ACCV, and ICMR. He received the Best Paper Diamond Award at IEEE ICME 2017, the Best Paper Runner-Up Award at ACM NPAR 2010, a Google Faculty Award in 2012, the Best Student Paper Award at IEEE ICME 2014, and a Hong Kong Scholars Award in 2014. He is a Fellow of the IET.
Topic: Depth Learning---When Depth Estimation Meets Deep Learning
Abstract: Depth data is indispensable for reconstructing and understanding 3D scenes. It serves as a key ingredient in applications such as synthetic defocus, autonomous driving, and augmented reality. Although active 3D sensors (e.g., LiDAR, ToF, and structured-light 3D scanners) can be employed, recovering depth from monocular or stereo cameras is typically a more cost-effective approach. However, estimating depth from images is inherently under-determined; to regularize the problem, one typically needs handcrafted models characterizing the properties of depth data or scene geometry. With recent advances in deep learning, depth estimation has been cast as a learning task, leading to state-of-the-art performance. In this talk, I will present our recent progress on depth estimation with convolutional neural networks (CNNs). In particular, I will first introduce cascade residual learning (CRL), our two-stage deep architecture for stereo matching that produces high-quality disparity estimates. Observations with CRL inspired us to propose a domain-adaptation approach---zoom and learn (ZOLE)---for training a deep stereo matching algorithm without ground-truth data from the target domain. By combining a view synthesis network with the first stage of CRL, we propose single view stereo matching (SVS) for single-image depth estimation, with performance superior to the classic stereo block matching method that takes two images as input.
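As background to why stereo matching networks predict disparity rather than depth directly, a minimal sketch of the standard rectified-stereo geometry may help: depth is inversely proportional to disparity via the focal length and camera baseline. The parameter values below are illustrative, not from the talk.

```python
# Standard rectified-stereo relation: depth Z = f * B / d, where
# f is the focal length in pixels, B the baseline in meters, and
# d the disparity in pixels. Values here are illustrative only.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Convert a disparity (pixels) to metric depth for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A point with 50 px disparity, seen by cameras with f = 1000 px, B = 0.1 m:
z = depth_from_disparity(50.0, 1000.0, 0.1)  # 2.0 meters
```

The inverse relation is why small disparity errors on distant points (small d) translate into large depth errors, one reason high-quality disparity estimates matter.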
Ming C. Lin is currently the Elizabeth Stevinson Iribe Chair of Computer Science at the University of Maryland College Park and John R. & Louise S. Parker Distinguished Professor Emerita of Computer Science at the University of North Carolina (UNC), Chapel Hill. She is also an honorary Chair Professor (Yangtze Scholar) at Tsinghua University in China. She obtained her B.S., M.S., and Ph.D. in Electrical Engineering and Computer Science from the University of California, Berkeley. She received several honors and awards, including the NSF Young Faculty Career Award in 1995, Honda Research Initiation Award in 1997, UNC/IBM Junior Faculty Development Award in 1999, UNC Hettleman Award for Scholarly Achievements in 2003, Beverly W. Long Distinguished Professorship 2007-2010, Carolina Women’s Center Faculty Scholar in 2008, UNC WOWS Scholar 2009-2011, IEEE VGTC Virtual Reality Technical Achievement Award in 2010, and many best paper awards at international conferences. She is a Fellow of ACM, IEEE, and Eurographics.
Her research interests include computational robotics, haptics, physically-based modeling, virtual reality, sound rendering, and geometric computing. She has (co-)authored more than 300 refereed publications in these areas and co-edited/authored four books. She has served on hundreds of program committees of leading conferences and co-chaired dozens of international conferences and workshops. She is currently a member of Computing Research Association-Women (CRA-W) Board of Directors, Chair of IEEE Computer Society (CS) Fellows Committee, Chair of IEEE CS Computer Pioneer Award, and Chair of ACM SIGGRAPH Outstanding Doctoral Dissertation Award. She is a former member of IEEE CS Board of Governors, a former Editor-in-Chief of IEEE Transactions on Visualization and Computer Graphics (2011-2014), a former Chair of IEEE CS Transactions Operations Committee, and a member of several editorial boards. She also has served on several steering committees and advisory boards of international conferences, as well as government and industrial technical advisory committees.
Topic: Reconstructing Reality: From Physical World to Virtual Environments
Abstract: With increasing availability of data in various forms from images, audio, video, 3D models, motion capture, simulation results, to satellite imagery, representative samples of the various phenomena constituting the world around us bring new opportunities and research challenges. Such availability of data has led to recent advances in data-driven modeling. However, most of the existing example-based synthesis methods offer empirical models and data reconstruction that may not provide an insightful understanding of the underlying process or may be limited to a subset of observations.
In this talk, I present recent advances that integrate classical model-based methods and statistical learning techniques to tackle challenging problems that have not been previously addressed. These include flow reconstruction for traffic visualization, learning heterogeneous crowd behaviors from video, simultaneous estimation of deformation and elasticity parameters from images and video, and example-based multimodal display for VR systems. These approaches offer new insights for understanding complex collective behaviors, developing better models for complex dynamical systems from captured data, delivering more effective medical diagnosis and treatment, as well as cyber-manufacturing of customized apparel. I conclude by discussing some possible future directions and challenges.
University of Rennes 1
Marc Christie is an associate professor at University of Rennes 1. His research focuses on virtual cinematography, the application of real cinematography techniques to virtual 3D environments. The research covers a wide range of challenges, such as extracting data from real movies, learning elements of film style (types of transitions, continuity between shots, editing patterns), proposing models and techniques to re-apply the learned elements to virtual content, and computing camera angles and trajectories as well as optimal edits. Recently, Marc has focused his research on how these models and techniques can be transferred to drones, opening up the topic of cinematographic drones. He has co-authored more than 40 conference papers on these topics and has led courses at Eurographics and SIGGRAPH Asia.
Topic: VR content creation for movie previsualisation
Abstract: Creatives in animation and film production have long explored new means of visually designing filmic sequences before realizing them in studios, using a range of techniques: hand-drawn storyboards, physical mockups, or, more recently, virtual 3D environments (a practice called previsualisation). A central issue in using virtual 3D environments to rehearse a sequence is the complexity of content creation tools, which are not accessible to creatives such as film directors, directors of photography, or lighting designers. In this talk, we take the path of using VR not as an experiential exploration tool for virtual environments, but as an authoring system that enables the crafting of filmic sequences even for creative people who are not experts with 3D tools. The proposed system is designed to reflect the traditional creative process through (i) the creation of storyboards using VR, and (ii) the creation of animated filmic sequences using VR (designing the scene, placing the cameras, and performing a montage between the cameras). As a benefit, the proposed approach enables a novel and seamless back-and-forth between all stages of the process. A user evaluation with students from film schools, both experts and non-experts, reports the benefits of such a system in prototyping animated sequences for movie storyboarding and rehearsal compared to traditional tools, and demonstrates strengths such as (i) ease of use, (ii) spatialization that reduces manipulations, and (iii) seamless back-and-forth between stages. The tool is currently under evaluation in film schools, previsualisation companies, and feature animation film companies.
Vice President of Developer Ecosystems, NVIDIA; President of the Khronos Group. At NVIDIA, Neil works to enable applications to leverage advanced silicon acceleration. Neil is also the elected President of the Khronos Group, where he has helped initiate and evolve APIs and formats such as Vulkan, OpenXR, OpenGL ES, WebGL, glTF, OpenCL, OpenVX, and NNEF.
Topic: Open Standards for Building Virtual and Augmented Realities
Abstract: For VR and AR to become truly pervasive, native applications and the Web need to be enabled with portable and open standards for 3D, vision, and inferencing acceleration, efficient formats for delivering 3D assets, and cross-platform APIs for user interaction and scene analysis. The Khronos Group is working alongside other international standards organizations to create the building blocks for XR-enabled browsers and applications. This presentation will provide an update on the very latest developments in Khronos standards and how they fit within the larger industry XR ecosystem.
Professor of Beihang University (BUAA), member of the China Academy of Engineering (CAE), Chief Scientist of State Key Laboratory of Virtual Reality Technology and System, President of China Simulation Federation (CSF).
Professor Zhao has conducted virtual reality and artificial intelligence research for many years and has completed more than 20 national science and technology programs, including projects under the National Natural Science Foundation, the National High-tech R&D Program, and the National Basic Research Program of China. As the principal contributor, he was awarded the National Prize for Progress in Science and Technology (Grade One once and Grade Two twice) and the National Prize for Technical Innovation (Grade Two once). To date, he has published 3 academic monographs and more than 180 papers, and holds 60 national patents.
Topic: Promote the IQ of Computer System
Abstract: The IQ of a computer system refers to the humanlike intelligence of its hardware and software, as well as the humanlike thinking of its designers and producers. Drawing on human logical thinking, we analyze and grade the humanlike thinking ability of computer systems, and pursue further efforts to promote their humanlike potential.
The University of Hong Kong
Wenping Wang is Chair Professor of Computer Science at the University of Hong Kong. His research interests cover computer graphics, computer visualization, computer vision, robotics, medical image processing, and geometric computing, and he has published over 140 journal papers in these fields. He is journal associate editor of several international journals, including Computer Aided Geometric Design (CAGD), Computer Graphics Forum (CGF), IEEE Transactions on Computers, and IEEE Computer Graphics and Applications, and has chaired a number of international conferences, including Pacific Graphics 2012, ACM Symposium on Physical and Solid Modeling (SPM) 2013, and SIGGRAPH Asia 2013. Prof. Wang received the John Gregory Memorial Award for his contributions in geometric modeling. He is an IEEE Fellow.
Topic: On Reconstructing 3D Wire Objects
Abstract: 3D shape reconstruction has widespread applications in computer graphics, computer vision, robotics and virtual reality. However, the reconstruction of 3D wire objects has received relatively little research attention, despite the ubiquity of these thin objects, such as ropes, cables, tree branches, wire arts and wire-frame furniture. In this talk I will present our recent works on an image-based reconstruction method and on using a hand-held commodity RGBD sensor for scanning and reconstructing wire objects with a skeleton-based fusion approach. I will also discuss a range of outstanding challenges that need to be addressed in order to achieve reliable and real-time reconstruction of wire objects in the wild.
Kochi University of Technology
Xiangshi Ren is a professor in the School of Information and director of the Center for Human-Engaged Computing (CHEC) at Kochi University of Technology. He is founding president and honorary lifetime president of the International Chinese Association of Computer-Human Interaction (ICACHI). He was named one of the Asian Human-Computer Interaction Heroes at ACM CHI 2015. He was a visiting professor at the University of Toronto, a visiting faculty researcher at IBM Research (Almaden), and a visiting/guest/chair professor at several universities in China. Currently, he is an adjunct professor at Jilin University and Beijing Normal University. He is a Senior Member of both the ACM and the IEEE.
Prof. Ren has been working on fundamental studies in the field of Human-Computer Interaction (HCI) for over twenty-five years. His research interests include all aspects of human-computer interaction, particularly human performance models, pen-based interaction, multi-touch interaction, eye-based interaction, haptic interaction, gesture input, game interaction, user interfaces for older users and for blind users. He and his colleagues have established a unique research framework based on information technology, incorporating methodologies such as human performance modeling, developing new algorithms, conducting user studies, and systematically testing and applying HCI theory to applications.
Topic: From Human-Computer Interaction to Human-Engaged Computing
Abstract: This talk is in three parts: 1) First, I will review the history of Human-Computer Interaction (HCI), discuss the future relationship between humans and computers, and describe a new overarching perspective for development, Human-Engaged Computing (HEC), for the next generation of human-computer interaction. 2) Second, I will give a summary of my HCI studies from the past 25 years. 3) Finally, I will share some valuable principles that I have learned through my experience in HCI research.
Texas A&M University
Jinxiang Chai is currently the founder and CEO of Xmov.ai, which develops the world’s first scalable end-to-end solution for high-fidelity performance-based animation of human characters. He is also a tenured professor in the Department of Computer Science and Engineering at Texas A&M University. He received his Ph.D. in Robotics from the School of Computer Science at Carnegie Mellon University in 2006. His primary research is in the area of computer graphics and vision, with a focus on human motion capture, analysis, synthesis, simulation, and control. He is particularly interested in developing real-time human motion capture technologies for animation 2.0 and natural user interfaces for next-generation computing platforms such as smart TVs, AR/VR, and service robots. He has published 20 SIGGRAPH/TOG papers on human motion analysis, synthesis, capture, and control. He received an NSF CAREER award for his work on the theory and practice of Bayesian human motion synthesis.
Topic: Human Motion Capture: Applications, Challenges and Progress
Abstract: Motion capture technologies have made revolutionary progress in computer animation over the past decade. With detailed motion data and editing algorithms, we can directly transfer the expressive performance of a real person to a virtual character, interpolate existing data to produce new sequences, or compose simple motion clips to create a rich repertoire of motor skills. Beyond computer animation, motion capture technologies have enabled natural user interaction for computers, smart phones, game consoles, smart TVs, VR/AR, and service robots, as well as human motion recognition for video analysis and intelligent security monitoring.
Current motion capture technologies are often restrictive, cumbersome, and expensive. Video-based motion capture offers an appealing alternative because it requires no markers, sensors, or special suits, and therefore does not impede the subject’s ability to perform the motion. Graphics and vision researchers have been actively exploring video-based motion capture for many years and have made great advances. However, the results are often vulnerable to ambiguities in video data (e.g., occlusions), degeneracies in camera motion, and a lack of discernible features on a human body or hand.
In this talk, I will describe our recent efforts on acquiring human motion using RGB/RGBD cameras. Notable examples include full-body motion capture using a single depth camera; real-time, automatic 3D facial performance capture, including eye gaze, using a single RGB camera; real-time hand gesture capture using a single depth camera; and acquiring physically realistic hand grasping and manipulation data, as well as physically accurate human motion, using multiple cameras. I will also discuss applications of human motion capture in natural user interaction and character animation.
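As general background to depth-camera capture of the kind described above, a minimal sketch of the standard pinhole back-projection may be useful: each depth pixel can be lifted to a 3D point in camera space given the camera intrinsics. The intrinsic values below are illustrative assumptions, not parameters from the talk.

```python
# Standard pinhole back-projection: lift a depth pixel (u, v, z) into a
# 3D camera-space point using intrinsics (fx, fy) and principal point (cx, cy).
# Intrinsic values in the example are illustrative only.

def backproject(u: float, v: float, z: float,
                fx: float, fy: float, cx: float, cy: float):
    """Return the 3D camera-space point for pixel (u, v) with depth z (meters)."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# The principal point itself, at 1 m depth, maps to a point on the optical axis:
p = backproject(320.0, 240.0, 1.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
# p == (0.0, 0.0, 1.0)
```

Back-projecting every valid pixel of a depth frame this way yields the point cloud that skeleton-fitting and full-body capture pipelines typically operate on.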