Adaptive Logic Neural Networks for Autonomous Robot Navigation
This is the updated mirror of http://www.gorodnichy.ca/archives/phd/WorldModeling/.
Ph.D. Thesis
Specialization: Computer Vision and Artificial Intelligence.
Department of Computing Science, University of Alberta, Edmonton, Canada. April 2000.
"Vision-based World Modeling Using A Piecewise Linear Representation of The Occupancy Function"
Download the dissertation: from Collections Canada, ResearchGate, videorecognition.com
This thesis considers the task of building world models from uncertain range data. We study the occupancy approach, which is one of the most popular approaches used for this task. We identify three problems of this approach which prevent it from being used for building 3D world models. The thesis aims to resolve these problems.
The first problem concerns the design of sensor models which assign the values of uncertainty to registered range data. Vision-based sensors are the most affordable sensors capable of registering 3D range data. However, their sensor models are not known or are very difficult to calculate using probability theory. In the thesis we propose a new approach for building visual sensor models which uses evidence theory. This approach allows one to efficiently build sensor models of unreliable, inexpensive video systems by employing stereo error analysis. We present the design of an inexpensive visual range sensor which consists of a single off-the-shelf video camera. This visual sensor is shown to be very suitable for world exploration problems.
The second problem deals with the combination rule, which combines uncertainty values obtained from different range data. Approximations of the Bayesian and Dempster-Shafer rules, which are the common rules used in the occupancy approach, in many cases assume the independence of range data, contrary to the usual situation. In the thesis, we develop a new technique for combining range data which is based on linear regression. This technique does not make assumptions about the data and can therefore be applied to combining dependent range data like those provided by a single camera range sensor.
Finally, the third problem concerns the redundancy of stored and processed data, which results from using the grid representation of the occupancy function. In the thesis we establish a new framework for representing the occupancy function in a parametric way using piecewise linear surfaces. This framework, which is the major thrust of the thesis, uses the developed techniques for registering and combining visual range data, and is tested on both simulated and real range data. The advantages and the limitations of the proposed framework are studied. Besides being more space-efficient, this framework is also shown to be more efficient for map extraction and world exploration.
While much remains to be done in the area we believe that the proposed strategies for building sensor models, combining uncertain range data, and using parametrically represented occupancy functions provide the basis for new applications of the occupancy approach and will promote the development of this approach in both world modeling and robot navigation.
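For readers unfamiliar with the baseline the abstract critiques: the standard grid-based occupancy update, under the independence assumption discussed above, reduces to a simple log-odds accumulation per cell. A minimal illustrative sketch (this is the conventional Bayesian update, not the thesis's regression-based method; the numbers are made up):

```python
import math

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def bayes_update(prior_logodds, p_occ_given_measurement):
    """Standard independent-measurement Bayesian occupancy update in log-odds form."""
    return prior_logodds + logit(p_occ_given_measurement)

# toy example: one cell observed twice as "probably occupied" (p = 0.7)
l = 0.0  # log-odds 0 == prior probability 0.5
for p in (0.7, 0.7):
    l = bayes_update(l, p)

posterior = 1.0 - 1.0 / (1.0 + math.exp(l))  # back to probability
```

Note how the two measurements are simply added in log-odds space; this is exactly where the independence assumption enters, and why dependent data (e.g. from a single moving camera) is over-counted.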
Contents:
- Chapter 1 - "Introduction",
- Chapter 2 - "Modeling From Bad Data. Occupancy Approach"
- Chapter 3 - "Visual Sensor for Mobile Robot World Exploration"
- Chapter 4 - "Combining Uncertain Range Data Using Regression"
- Chapter 5 - "Using Piecewise Linear Representation for World Exploration"
- Chapter 6 - "The Framework and Robot Boticelli: Implementation of the Ideas"
- Chapter 7 - "Conclusion" (Contributions and Future Work)
- Bibliography and Appendix (Epipolar lines of single-camera stereo)
Here is the presentation made for the thesis defence: PhD.ppt (1.4Mb), and here is the presenter: PhD.jpg
Demos: Meet "Boticelli", a single-eye robot designed for autonomous world exploration and navigation
- "Stereo-rig setup and range data acquisition" (local copy: 4.2Mb),
- The robot autonomously finds its way searching for an object in a room (4.0Mb) - more
- On YouTube
Publications
- Vision-based Occupancy Modeling (D.O. Gorodnichy, W.W. Armstrong), ISR2000
- Abstract: The occupancy approach for world modeling is commonly used under the following circumstances: 1) when no geometric constraints can be imposed on the environment, 2) when the environment is changing, 3) when range sensors are not reliable, and/or 4) when computation time is critical (e.g. as in mobile robotics). Vision is commonly used for the following purposes: 1) building 3D models, 2) making modeling faster, 3) making modeling affordable, and/or 4) modeling large-scale environments. Vision-based occupancy modeling, however, still faces many unresolved problems. This paper summarizes these problems and presents a new framework based on evidence theory and regression, which allows one to build large-scale 3D models from uncertain visual range data.
- ***
- "Single Camera Stereo for Mobile Robots" (D.O. Gorodnichy, W.W. Armstrong), Vision Interface (VI'99) conference proceedings, Quebec, May 18-21, 1999.
- Abstract: This paper introduces a single-camera-based stereo vision system used for creating local 3D occupancy models. The design of the system is described, the range data error analysis is presented and the sensor model which assigns the values of evidence to the registered data is built. The application of the proposed system for mobile robot world exploration is shown. Data obtained by running a single camera mobile robot are presented.
- botstereo.ps.gz (445Kb)
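The stereo error analysis mentioned in the abstract rests on the standard triangulation relations. A sketch with hypothetical rig parameters (the paper's single-camera geometry, where the baseline comes from robot motion, is more involved, but the depth-uncertainty behaviour is the same):

```python
def depth_from_disparity(f_pixels, baseline_m, disparity_pixels):
    """Standard stereo triangulation: z = f * b / d."""
    return f_pixels * baseline_m / disparity_pixels

def depth_uncertainty(f_pixels, baseline_m, disparity_pixels, disparity_error=1.0):
    """First-order error propagation: |dz/dd| * delta_d = z^2 / (f*b) * delta_d.
    Depth uncertainty grows quadratically with depth, which is what a
    vision-based sensor model must encode."""
    z = depth_from_disparity(f_pixels, baseline_m, disparity_pixels)
    return z * z / (f_pixels * baseline_m) * disparity_error

# toy numbers (hypothetical rig, not the thesis's calibration):
z = depth_from_disparity(500.0, 0.1, 25.0)   # 2.0 m
dz = depth_uncertainty(500.0, 0.1, 25.0)     # uncertainty at that depth
```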
- ***
- "On Using Regression in Range Data Fusion" (D.O. Gorodnichy). Canadian Conference on Electrical and Computer Engineering (CCECE'99) proceedings, May 9-12, Edmonton, 1999
- Abstract: In this paper we consider an occupancy-based approach for range data fusion, as it is used in mobile robotics. We identify two major problems of this approach. The first problem deals with the combination rule which in many cases assumes the independence of range data, contrary to the usual situation. The second problem concerns the redundancy of stored and processed data, which results from using the grid representation of the occupancy function and which is the main obstacle to building 3D occupancy world models.
- We address these problems by proposing a new range data fusion technique based on regression. This technique uses evidence theory in assigning occupancy values, which we argue is advantageous for fusion, and builds the occupancy function by fitting the sample data provided by a sensor with a piecewise linear function.
- Having developed a general framework for our approach, we apply it to building 3D occupancy models from visual range data, where the models are used for navigating a robot in an unknown environment.
- botfusion.ps.gz (294Kb)
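A minimal sketch of the regression idea: fitting the occupancy function to sensor samples by ordinary least squares, rather than updating grid cells independently. For illustration this fits a single 1-D line to evidence samples along one sensor ray (the thesis fits piecewise linear surfaces; the sample values here are invented):

```python
def fit_line(samples):
    """Ordinary least-squares fit of occ = a*x + b to (x, occ) samples.
    No independence assumption on the samples is needed for the fit itself."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# evidence samples along a ray: free (0) near the sensor, occupied (1) at the hit
samples = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (3.0, 1.0)]
a, b = fit_line(samples)  # occupancy rises linearly toward the obstacle
```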
- ***
- "A Parametrical Alternative for Grids in Occupancy Based World Modeling" (D.O. Gorodnichy, W.W. Armstrong). Quality Control by Artificial Vision (QCAV'99) conference proceedings, Quebec, May 18-21, 1999
- Abstract: In the paper, we consider an occupancy-based approach for range data fusion, as it is used in mobile robotics. We tackle the major problem of this approach, which is the redundancy of stored and processed data caused by using the grid representation of the occupancy function, by proposing a parametric piecewise linear representation. When applied to vision-based world exploration, the new representation is shown to have advantages, which include its suitability for radial range data, its efficiency in representing and fusing range data, and its convenience for navigation map extraction. The proposed technique is implemented on a mobile robot, Boticelli. The results obtained from running the robot are presented.
- botmodels.ps.gz (240Kb) and botmodels.pdf (165Kb)
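The piecewise linear representation can be evaluated ALN-style, as a max of mins of linear pieces; this is what makes it compact compared to a grid. A toy sketch (hypothetical planes; the thesis's construction of the surfaces from range data is far richer):

```python
def plf_max_min(x, y, groups):
    """Evaluate a piecewise linear surface as a max of mins of planes.
    groups: list of lists of (a, b, c), each plane being z = a*x + b*y + c.
    A handful of plane parameters replaces thousands of grid cells."""
    return max(min(a * x + b * y + c for a, b, c in g) for g in groups)

# a simple ridge along x = 1: min of an up-slope and a down-slope
groups = [[(1.0, 0.0, 0.0), (-1.0, 0.0, 2.0)]]
peak = plf_max_min(1.0, 0.0, groups)  # top of the ridge
```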
- ***
- "Reinforcement Learning for Autonomous Robot Navigation" (W.W. Armstrong, B. Coghlan, D.O. Gorodnichy), International Joint Conference on Neural Networks (IJCNN'99) , Washington DC, July 21-23, 1999.
- Abstract: The Boticelli project has the goal of showing the usefulness of piecewise linear functions (PLFs) in various aspects of autonomous robot navigation. One aspect is world modeling, where the 3D occupancy function is efficiently represented as a PLF. Another is using a PLF as a representation of the value function in reinforcement learning. The present paper overviews the project and demonstrates that a PLF approximating a solution to Bellman's equation can support robot motion planning.
- Handout(1.2MB)
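The Bellman backup that the paper approximates with a PLF can be sketched in its simplest tabular form (a 1-D corridor with a goal at one end; the paper's contribution is replacing this table with a PLF, which this sketch does not do):

```python
def value_iteration(rewards, gamma=0.9, sweeps=50):
    """Tabular value iteration on a 1-D corridor.
    rewards[i] is the reward for entering cell i; actions are move left/right.
    Each sweep applies the Bellman backup V(i) = max_j (r(j) + gamma * V(j))
    over the neighbours j of cell i."""
    n = len(rewards)
    v = [0.0] * n
    for _ in range(sweeps):
        v = [
            max(rewards[j] + gamma * v[j] for j in (i - 1, i + 1) if 0 <= j < n)
            for i in range(n)
        ]
    return v

# goal (reward 1) at the right end of a 5-cell corridor
v = value_iteration([0, 0, 0, 0, 1])
# the value function slopes up toward the goal, supporting greedy motion planning
```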
Neural Networks for Face Recognition and Tracking (Term Project for Image Processing course)
- Bibliography: NNforFRbib.tex
- "Adaptive Logic Networks for Facial Feature Detection" (D.O. Gorodnichy, W.W. Armstrong, X. Li), Lecture Notes in Computer Science, Vol. 1311 (Proc. of 9th Intern. Conf. on Image Analysis and Processing (ICIAP'97), Florence, Italy, Sept. 1997, Vol. II), pp. 332-339, Springer.
- Abstract: The task of automatic facial feature detection in frontal-view, ID-type pictures is considered. Attention is focused on the problem of eye detection. A neural network approach is tested using adaptive logic networks, which are suitable for this problem on account of their high evaluation speed on serial hardware compared to that of more common multilayer perceptrons. We present theoretical reasoning and experimental results. The experiments are carried out with images of different clarity, scale, lighting, orientation and backgrounds.
- Keywords: neural network, adaptive logic network, face recognition, eye detection.
- Compressed postscript file (130Kb): aln_eyes.ps.gz
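The evaluation-speed advantage mentioned in the abstract comes from the ALN's structure: a tree of AND/OR nodes over linear threshold units, which can be evaluated lazily. A toy sketch (the two-test "detector" is hypothetical; real ALNs also adapt the tree during training):

```python
def linear_threshold(weights, bias, x):
    """A leaf of an adaptive logic network: the sign of a linear form."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias >= 0

def aln_eval(node, x):
    """Evaluate an ALN tree of 'and'/'or' nodes over linear threshold leaves.
    Python's all()/any() short-circuit on generators, mirroring the lazy
    evaluation that makes ALNs fast on serial hardware."""
    op, children = node
    if op == "leaf":
        w, b = children
        return linear_threshold(w, b, x)
    results = (aln_eval(c, x) for c in children)
    return all(results) if op == "and" else any(results)

# tiny hypothetical detector: two half-plane tests ANDed together
net = ("and", [("leaf", ([1.0, 0.0], -0.5)), ("leaf", ([0.0, 1.0], -0.5))])
```

If the first leaf rejects, the second is never computed, which is the source of the speed-up over a multilayer perceptron that must evaluate every unit.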
This Ph.D. thesis required conducting research in four areas:
Computer Vision (Problem 1) - to design an affordable range sensor capable of registering 3D data, which includes defining a sensor model and developing a paradigm for combining registered range data.
Uncertainty in AI (Problem 2) - to suggest and investigate a new approach to fusing uncertain range data, based on linear regression, as a solution to the problem of combining dependent and sparse range data.
World Modeling (Problem 3) - to suggest and investigate a parametric alternative to the grid-based approach in world modeling and map building, as a solution to the problems of redundancy of processed data and the inefficiency of grids for navigation.
Machine Learning (Problem 4) - to explore reinforcement learning in navigation planning, where reinforcement learning is accomplished with an Adaptive Logic Network technique, which is known to be fast.
Other papers on Neural Networks at University of Alberta
Applying Hopfield Neural Networks for Artificial Intelligence problems (Term Project for AI course)
Abstract: Artificial Intelligence (AI) is rich with problems for which finding a solution by conventional search methods is computationally intensive; the time required is often exponential in the number of variables. A principally different way of searching for a solution is to build a dynamic self-organizing system whose stable states correspond to the desired solutions. This paper studies the possibility of using such nondeterministic methods for solving AI problems and shows why Hopfield Neural Networks (HNs) are well suited to the role of such a dynamical system. Three approaches to applying Hopfield Neural Networks to AI problems are discussed: the Energy approach, the Probability approach, and the Graph approach. The last is based on the Hopfield clique network, introduced by Jagota in 1990, which is known for the property that its stable states are exactly the maximal cliques of the underlying graph. The purpose of the paper is to give a clear idea of how HNs can be applied to different AI problems. All described approaches are illustrated with concrete examples; the main theorems needed for understanding the mechanisms are stated along with the ideas of their proofs, and the areas where these approaches are and are not applicable are identified.
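The "dynamic self-organizing system" idea can be sketched in a few lines: a binary Hopfield network whose asynchronous updates never increase the energy, so the dynamics settle into a stable state. A minimal sketch with a single Hebbian-stored pattern (the toy pattern is invented for illustration):

```python
def hopfield_step(W, state):
    """One asynchronous update sweep of a binary (+1/-1) Hopfield network.
    Each flip never increases the energy E = -1/2 * s^T W s, which is why
    stable states can be made to encode solutions of AI search problems."""
    s = list(state)
    for i in range(len(s)):
        field = sum(W[i][j] * s[j] for j in range(len(s)) if j != i)
        s[i] = 1 if field >= 0 else -1
    return s

def energy(W, s):
    """Hopfield energy function (zero diagonal assumed)."""
    n = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(n) for j in range(n) if i != j)

# store the pattern (+1, -1, +1) with a Hebbian weight matrix
p = [1, -1, 1]
W = [[p[i] * p[j] if i != j else 0 for j in range(3)] for i in range(3)]
s = hopfield_step(W, [1, 1, 1])  # one sweep recovers the stored pattern
```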
Who's Who: reference links, techniques and names (updated continuously)
Computer Vision (Problem 1)
(Using knowledge of the stereo geometry for calculating confidence values)
- 3D structure from 2D images: Kanatani, Amar Mitiche (INRS) , Faugeras, Ayache.
- Affine model, Euclidean reconstruction with a single camera: INRIA, Rhone-Alpes (Radu Horaud, Roger Mohr)
- Essential Matrix, ego-motion estimation: Tomas Svoboda, Richard Hartley
- Error Analysis: L. Mathies, S. Blostein, J. Miura and Y. Shirai (for Motion Planning)
- Robust Stereo Matching: C. Menard, Ralf Henkel (interesting coincidence: Ralf's Ph.D. thesis was also on Pseudo Inverse Neural Networks, and now he is also in stereo)
Fusion of Uncertain Sensor Data (Problem 2)
(Sources of uncertainty in vision: lens distortion & limited resolution)
(Data are dependent because of the sensor models and because the same sensor is used)
- Reasoning With Uncertainty in AI (Evidence- vs Probability- approaches)
- Reasoning with Uncertainty in Robot Navigation On-line Proceedings (RUR'99)
- Uncertainty in AI (on-line proceedings)
- UAI Forum Page has a very interesting discussion on this!
- Frans Voorbraak: on justification of Dempster-Shafer; DS vs Bayesian vs Partial Probability Theory; for robotics
- Reasoning with Uncertainty project
- Robert Hummel: Bayesian foundation for the Dempster-Shafer method; relationship of DS to Kalman filtering
- Dempster-Shafer Theory for Sensor Fusion in Autonomous Mobile Robots by Robin R. Murphy : temporal fusion, sensor failure issue
- Approximations for Dempster-Shafer rule: Mathias Bauer
- Defect in Dempster-Shafer theory (on the {E,F} example: the degree of belief, which is a function of the accumulating weight of evidence, doesn't converge to chance: Bel != Pr=t^/t): Pei Wang
- Richard Szeliski: Bayesian; in low-level vision
- Rules for combining uncertain range data in mobile robotics
- Probabilistic approaches:
- Bayesian (assuming that the occupancy grid is an MRF of order 0, i.e. cell states are independent): Alberto Elfes, Hans Moravec
- Modifications of Bayesian: Kurt Konolige (sonar)
- EKF (i.e. using second moments) : H. Durrant-Whyte, J. Crowley
- Bayes Networks: A. Berler, S.E. Shimony (no independence assumption)
- Decentralized (Markovian model, Entropy) approach: O. Basir (sonar + vision)
- HMM2 for Place Recognition and Navigation: INRIA Lorraine
- Sensor failure detection (Bayesian, occupancy grid based): Martin Soika
- Evidential approaches:
- Dempster-Shafer rule: H. Durrant-Whyte
- Ad-hoc (DS approximations) rules: Johann Borenstein (From motion)
- Non-Axiomatic Reasoning System: Pei Wang (theory only for {F,E}, for repeatedly counted evidence)
Occupancy (aka Evidence, Certainty) World Models (Problem 3)
Grids (Maps) and other Occupancy-based World Models (Representations)
- 3D Occupancy models
- 3D Occupancy Grids: Hans Moravec at RI CMU
- Octree Models from Laser Range data: Denis Laurendeau at Laval University
- For mesh models and object recognition: Visual Information Technology at NRC
- "Multiresolution probabilistic occupancy grid modeling of 3-D space for safe path planning", thesis by P. Payeur
- Incorporating uncertainty, occupancy vs volumetric models: IRIS Lab, The University of Tennessee
- "Volumetric modeling through fusion of multiple range images with confidence estimate", thesis by D. Elsner
- As a set of stochastic obstacle regions Mk:{mean, cov, N}, clustering from laser scans, eigenvalues/eigenvectors of the covariance matrix: Pohang University of Science and Technology, Korea
Vision (or other range sensor) Based Mobile Robot projects
- University of Alberta ("Boticelli"): "Boticelli: a single camera mobile robot using new approaches to range data fusion, world modeling, and navigation planning" (W.W. Armstrong, D.O. Gorodnichy, B. Coghlan), DRES report, Dec. 1998 ( Compressed postscript (2 MB))
- University of British Columbia ("Spinoza"): trinocular stereo.
- McGill University (Gregory Dudek): extracting areas of interest.
- University of Bonn ("Rhino"): combining occupancy and topological, position rectification, disparity between vertical lines.
- University of Karlsruhe ("Priamos", "Mortimer"): digital cameras (30% noise reduction), 3D maps from a trinocular stereo (edges) and planar laser scanner (up to 10 m ±2 cm), KF.
- Institut für Neuroinformatik in Bochum ("Arnold"): with an arm, cameras only (2 b/w 90° FOV "periphery" + 2 colour 33° FOV "fovea", on a pan/tilt/vergence unit), colour + vertical edges
- Robuter (J.Crowley): active stereo + sonar, Estimation theory & KF for correcting the position
- Carnegie Mellon University, Robotics Institute (H. Moravec): 3D occupancy grids, image rectification, interest operator
- CS Dept at Carnegie Mellon University: Amelia, Xavier
- University of Pennsylvania, GRASP Lab: lens distortion rectification
- Navy Center for Applied Research in Artificial Intelligence ("Ariel", "Coyote", Brian Yamauchi): frontier-based exploration, continuous localization
- Stanford CS Robotics Lab: www demo, Nomad
- University of Osaka, Shirai Lab: vision-motion planning based on (probabilistic) dual-camera stereo uncertainty/evidence level (using a local disparity histogram for quantization and mismatching(!) error analysis in measuring the distance between features, resolving ambiguities)
- In Japan (map of robotics research)
- Okayama University of Science: Lashkia Lab (recognition, learning)
- University of Osaka: Yachida Lab (Dr. Yagi)
- [ Meiji University : Takeno Lab.], [ Tsukuba University : Yuta Lab.]
- Nagoya Institute of Technology: the fastest laser range scanner (for faces)
- Dr. N. Kita : Active vision, tracking and depth estimation by two vergent cameras
- Georgia Tech Mobile Robot Laboratory: no fusion or uncertainty; neural-network-based, multi-agent.
- AAAI Robot Competitions