AutonoMUN 2022 will be a half-day workshop where researchers and practitioners interested in robotics and autonomous systems will gather to share their interests and establish contacts. The keynote talk will be given by Dr. Chris Clark of Harvey Mudd College, California.
Please indicate if you plan on attending by filling out the form below. Note that registration is free!
Time | Speaker | Title |
---|---|---|
1:00 - 1:05 | Andrew Vardy (MUN) | Opening Remarks |
1:05 - 1:45 | Chris Clark (Harvey Mudd College) | Robots Crossing Boundaries |
1:45 - 2:05 | Neil Bose’s Group (MUN) | The AOSCENT lab: Oil Spill Delineation using AUVs |
2:05 - 2:25 | Kevin Murrant (NRC) | Autonomous Marine Navigation at NRC |
2:25 - 2:45 | David Shea (Kraken Robotics) | TBD |
2:45 - 3:00 | COFFEE BREAK | |
3:00 - 3:20 | Oscar De Silva (MUN) | AI Multi-Sensor Navigation for VTOL Aircraft |
3:20 - 3:40 | Ting Zou (MUN) | Development of Biologically-inspired Robots |
3:40 - 4:00 | Vinicius Prado da Fonseca (MUN) | Using Machine Learning Models to Extract Object Information from Visual and Tactile Data during Robotic Manipulation |
4:00 - 4:20 | Andrew Vardy (MUN) | The Swarm in the Labyrinth |
4:20 - 4:25 | Andrew Vardy (MUN) | Closing Remarks |
Title: Robots Crossing Boundaries
Abstract: Over the last 50 years, autonomous robots have made the leap from novel research contributions in university labs to the fundamental technology upon which companies are built. Moreover, recent developments in robot technology have been leveraged as tools, enabling scientific sampling that could never have been realized before. With these developments, robotic applications have made an impact in oceanography, geology, archaeology, biomechanics, and biology. The speaker will discuss three key interdisciplinary projects to which Harvey Mudd researchers have contributed: shark tracking, shipwreck search, and education. These projects not only showcase several technical aspects of traditional robotics, including motion planning, state estimation, systems integration, and control theory, but also highlight the impact of interdisciplinary research and education.
Bio: Christopher Clark is a research scientist at Apple and at Harvey Mudd College, where he has also been a professor for over 10 years. Clark is a Fulbright Scholar, and for the 2011–2012 academic year he held the William R. Kenan, Jr. Visiting Professorship for Distinguished Teaching at Princeton University. He has also been a professor at the University of Waterloo and at California Polytechnic State University, San Luis Obispo. In 2004, he was among the first hires at the startup Kiva Systems (now Amazon Robotics), which transformed warehouse management through multi-robot systems. He earned his undergraduate degree in engineering physics from Queen’s University, Canada, a master’s in mechanical engineering from the University of Toronto, and a PhD in aeronautics and astronautics with a minor in computer science from Stanford University. Clark’s research areas include multi-robot systems, underwater robot systems, applied control theory, intelligent vehicles, state estimation, and motion planning.
Title: Autonomous Marine Navigation at NRC
Abstract: This presentation will give an overview of ongoing work at NRC related to autonomous navigation, illustrated through several projects. These include navigation in ice-covered waters, defining collision boundaries for autonomous marine traffic, and ice management with multiple autonomous vessels. A brief overview of NRC testing facilities related to marine navigation is also included.
Faculty/department affiliation: Intelligent Systems Lab, Memorial University of Newfoundland.
Title: AI Multi-Sensor Navigation for VTOL Aircraft
Abstract: Vertical take-off and landing (VTOL) aircraft are often used for transporting goods to and from inaccessible areas, surveillance, ground support, and similar missions, which can be enhanced and de-risked by increasing the autonomous capabilities of these vehicles. As part of the NRC Artificial Intelligence (AI) for Logistics supercluster support program, the Intelligent Systems Lab of Memorial University of Newfoundland, in collaboration with the NRC Flight Research Lab, is developing an AI-powered multi-sensor navigation module that can be integrated into VTOL vehicles in real time to improve their autonomous capabilities and reduce pilot workload. The system combines LiDAR, vision, and IMU data to generate map and location information. This sensor suite enables autonomous operation independent of GPS availability during the safety-critical take-off and approach portions of a VTOL mission. The overall project realizes a state-of-the-art visual-LiDAR-inertial navigation solution and proposes an AI-based scan matcher, a loop-closure module, and a mode-identification module to improve the robustness and overall accuracy of the navigation pipeline. This talk highlights the current development of the proposed visual-LiDAR-inertial navigation pipeline, with results from the AI-powered place recognition and point cloud segmentation modules of the system. Additionally, the talk will cover the hardware design of the navigation module and validation of the proposed architecture with a DJI M600 drone and the Bell 412 Advanced Systems Research Aircraft.
Title: Development of Biologically-Inspired Robots
Abstract: The development of biologically-inspired robots is a growing research trend in robotics, motivated by the astonishing performance that animals display as a result of millions of years of evolution. Our research focuses on developing robust, low-cost, and sophisticated bio-inspired robots, including bio-inspired aerial and underwater robots. We have made significant progress; for example, our bat-inspired robots show very good agreement with their biological counterparts in wing morphology. Bio-inspired robots with intelligent modules for fully autonomous operation will be the direction of future research.
Title: Using Machine Learning Models to Extract Object Information from Visual and Tactile Data during Robotic Manipulation
Abstract: Autonomous robotic assistants and the next generation of semi-autonomous prostheses will focus on making object manipulation more predictable and reliable. Achieving these goals requires novel manipulation methods together with accurate acquisition and processing of object information. This talk will present recent investigations that use machine learning models and methodologies to extract hand and object characteristics from visual and tactile data. Our current work focuses on several topics, such as improving human grasp classification and gesture recognition, as well as developing new visual-tactile data representations. We will present developments in haptic feedback, robot hand grasp stability, automatic texture recognition, and surface reconstruction. The presentation will show novel approaches for applying machine learning models to visual-tactile data and their applications in robotic hand control and biomedical engineering.