Work towards a PhD degree under the supervision of Prof. Vadim Indelman
When: 24.5.2021 at 15:30
Abstract: The fundamental goal of artificial intelligence (AI) research is to allow agents and robots to autonomously plan and execute their actions. To achieve reliable and robust performance, these agents must account for real-world uncertainty. Such uncertainty has multiple possible sources, including dynamic environments, in which unpredictable events might occur; noisy or limited sensor measurements, such as an imprecise GPS signal; and inaccurate execution of actions. Practically, these settings require reasoning over high-dimensional probabilistic states, known as “beliefs”, which represent the agent’s knowledge of the world. To decide on the optimal and “safest” course of action, the agent should probabilistically predict the future development of its belief, considering a set of candidate actions or policies. However, such belief propagation over long horizons requires computationally demanding optimization of numerous inter-connected variables. Real-time decision making under uncertainty therefore proves challenging, especially when processing power is limited, as is often the case with mobile robots. Hence, in our work, we focused on developing methods to reduce the computational complexity of this decision-making problem, while providing formal optimality guarantees.

In this talk, we will present several of the novel techniques we have developed. First, we will prove and demonstrate that relying on a sparse approximation of the agent’s belief, which in the Gaussian case is represented with a high-dimensional matrix, can significantly reduce the complexity of belief propagation, while still maintaining optimality (“action consistency”); this sparsification is utilized only in the planning stage, and thus compromises neither the quality nor the efficiency of the state estimation.
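To give a flavor of the planning-only sparsification idea, the toy sketch below thresholds weak off-diagonal entries of a Gaussian belief's information (inverse covariance) matrix. This naive thresholding rule and the threshold value are illustrative assumptions, not the action-consistent sparsification developed in the talk; the point is only that the estimator keeps the dense matrix while the planner works with a sparser one.

```python
import numpy as np

def sparsify_information_matrix(Lambda, threshold):
    """Zero out weak off-diagonal entries of a Gaussian belief's
    information (inverse covariance) matrix, keeping the diagonal
    intact. A hypothetical thresholding rule, for illustration only."""
    Lambda_sparse = Lambda.copy()
    off_diag = ~np.eye(Lambda.shape[0], dtype=bool)
    weak = off_diag & (np.abs(Lambda_sparse) < threshold)
    Lambda_sparse[weak] = 0.0
    return Lambda_sparse

# Dense information matrix of a toy 4-variable Gaussian belief.
Lambda = np.array([
    [4.00, 0.90, 0.01, 0.00],
    [0.90, 3.00, 0.80, 0.02],
    [0.01, 0.80, 5.00, 0.70],
    [0.00, 0.02, 0.70, 2.00],
])

# Used only for planning-time belief propagation; the state estimator
# continues to use the original dense matrix.
Lambda_planning = sparsify_information_matrix(Lambda, threshold=0.1)
```

A sparser planning-time matrix makes each predicted belief update cheaper, since the cost of the underlying factorization grows with the matrix fill-in.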
We will then show that when the action domain is large, bounded approximations allow us to easily eliminate unfit actions, sparing the need to exactly evaluate every candidate. Finally, we will introduce PIVOT: Predictive Incremental Variable Ordering Tactic. Unique to this approach, we optimize the representation of the present belief (matrix) based on the predicted development of the state in the future, rather than on current knowledge alone; this technique not only reduces the complexity of decision making, but also reduces the cost of “loop closing” when re-observing scenes during action execution. We will demonstrate the benefits of these methods in the solution of autonomous navigation and active Simultaneous Localization and Mapping (SLAM) problems, where we manage to significantly reduce computation time without compromising the quality of the solution.
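The action-elimination idea can be sketched with a simple pruning rule: if cheap lower and upper bounds on each candidate's objective are available, any candidate whose upper bound falls below another candidate's lower bound cannot be optimal. The candidate names and bound values below are hypothetical placeholders, not results from the talk.

```python
# A minimal sketch of action elimination via bounds, assuming we can
# cheaply bound each candidate's objective value from below and above.

def eliminate_actions(candidates, bounds):
    """Keep only candidates whose upper bound reaches the best lower
    bound; all others are provably suboptimal and are pruned without
    an exact (expensive) evaluation of the objective."""
    best_lower = max(bounds[a][0] for a in candidates)
    return [a for a in candidates if bounds[a][1] >= best_lower]

# Toy candidate actions with precomputed (lower, upper) objective bounds.
bounds = {
    "a1": (5.0, 7.0),
    "a2": (1.0, 4.0),   # upper bound below a3's lower bound -> pruned
    "a3": (6.0, 9.0),
    "a4": (2.0, 6.5),
}
survivors = eliminate_actions(list(bounds), bounds)
```

Only the surviving candidates need exact belief propagation, so the savings grow with the size of the action domain.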