From Science Robotics
Generalizing from a limited number of training examples: Deep neural networks and other machine learning algorithms are known to require enormous amounts of training data, whereas humans are able to generalize from as few as one example. We believe that generalizing from a few examples lies at the core of intelligence. So far, our models have required only a handful of examples to train.
Although deep learning research has made advances in unsupervised learning techniques, its recent successes are attributable to supervised learning with large amounts of data. We believe that unsupervised learning will be important for a large class of problems, and most of our efforts are focused on unsupervised learning techniques.
Using the cortex as a source of inductive biases and constraints: It is a widely held view that the learning efficiency and generalization capability of the brain come from its inductive biases. The organization of circuits in the neocortex provides rich clues about these inductive biases and inference algorithms, and investigating those clues in the context of the deficiencies of existing models could enable the discovery of new network architectures, learning algorithms, and inference mechanisms.
Like many other researchers, we believe that network architecture plays a significant role in generalization. Many of our experiments are designed to uncover insights about network micro-architecture. We emphasize parts-based representations and compositionality, similar to several other researchers building grammar-based models. In addition, the organization of cortical micro-circuitry provides rich clues about the nature of potential architectural modifications. Our efforts in this direction have already been very fruitful, yielding a new network architecture that provides tight control over the invariance-selectivity tradeoff.
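The invariance-selectivity tradeoff mentioned above can be made concrete with a toy sketch (not the authors' architecture): a local feature detector whose responses are max-pooled over spatial windows. All names here (`detect`, `pool`, the two-tap edge template) are illustrative assumptions; the point is only that wider pooling buys translation invariance at the cost of the ability to tell shifted inputs apart.

```python
# Toy illustration of the invariance-selectivity tradeoff (hypothetical
# code, not the network described in the text): detect a 1-D "edge"
# feature at each position, then max-pool the responses over a window.

def detect(signal, template=(1, -1)):
    """Return 1 at each position where the two-tap template matches exactly."""
    return [1 if tuple(signal[i:i + 2]) == template else 0
            for i in range(len(signal) - 1)]

def pool(responses, window):
    """Max-pool detector responses over non-overlapping windows."""
    return [max(responses[i:i + window])
            for i in range(0, len(responses), window)]

a = [0, 1, -1, 0, 0, 0, 0, 0]   # edge near the left of the signal
b = [0, 0, 0, 0, 1, -1, 0, 0]   # the same edge, shifted to the right

# Small pooling window: the two inputs map to different codes (selective).
print(pool(detect(a), 2), pool(detect(b), 2))   # [1, 0, 0, 0] vs [0, 0, 1, 0]

# Window spanning the whole signal: both collapse to the same code
# (fully translation-invariant, but no longer selective for position).
print(pool(detect(a), 7), pool(detect(b), 7))   # [1] vs [1]
```

Controlling the pooling structure (rather than fixing it) is one simple way an architecture can trade selectivity for invariance; the text suggests cortical micro-circuitry offers clues toward richer mechanisms.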