Learning by Composition and Exploration
Modern machine learning is largely predicated on several tacit beliefs. First, it is based on Turing’s 1950 assertion that “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a note-book as one buys it from the stationers. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child-brain that something like it can be easily programmed. The amount of work in the education we can assume, as a first approximation, to be much the same as for the human child.” In other words, it is assumed that humans learn by feeding their brains enough data, which, for vision, means visual stimuli.
Second, this ‘feeding’ is assumed to happen passively: the system is a passive observer, much like a barnacle on a rock that simply waits for nourishment to come to it.
Third, the nature of learning is assumed to be constant throughout the learning process; its mechanism and form never change.
All three of these beliefs are at odds with what is known about human learning.
This project seeks to develop a new learning paradigm that revises these three beliefs so that they are consistent with what is known about human perceptual and cognitive learning. The new paradigm, which we term “Learning by Composition and Exploration” (LCE), will be applied to the learning of Cognitive Programs, the updated version of the classic Ullman Visual Routines (see the Cognitive Programs project page).
Basic tenets of this paradigm include:
• An intelligent system acquires data as it is required, remains vigilant to its environment, and reasons about the role of that data in fulfilling its tasks.
• It features closed-loop control: the system is an active agent that interacts with its environment.
• The system is attentive: it dynamically tunes itself for the task and context at hand. It learns from these actions to improve its performance on subsequent tasks (in this sense there is some commonality with the basics of reinforcement learning).
• Active inductive inference seems a necessary mechanism to drive active exploration, and it requires a world model from which to draw inferences and direct their testing.
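These tenets can be illustrated, in a deliberately minimal way, with a toy closed-loop agent: it holds a simple world model, actively chooses where to look next (rather than passively receiving a fixed data stream), observes, and folds the result back into the model. Everything below — the class and method names and the least-sampled focus rule — is a hypothetical sketch under these assumptions, not an implementation of the LCE framework:

```python
import random


class ActiveObserver:
    """Toy closed-loop agent. Its 'world model' is a per-cell value
    estimate plus an observation count; it directs its own sampling
    instead of waiting for data to arrive. Illustrative only."""

    def __init__(self, world, noise=0.0, rng=None):
        self.world = world                   # hidden environment values
        self.noise = noise                   # observation noise level
        self.rng = rng or random.Random(0)
        self.estimates = [0.0] * len(world)  # world model: value estimates
        self.counts = [0] * len(world)       # observations per cell

    def select_focus(self):
        # Attentive step: direct the next observation to the
        # least-sampled cell (a crude stand-in for "most uncertain").
        return min(range(len(self.world)), key=lambda i: self.counts[i])

    def observe(self, i):
        # Active data acquisition: the agent queries the environment.
        return self.world[i] + self.rng.gauss(0.0, self.noise)

    def update(self, i, value):
        # Close the loop: fold the observation into the world model
        # with a running mean.
        self.counts[i] += 1
        self.estimates[i] += (value - self.estimates[i]) / self.counts[i]

    def explore(self, steps):
        for _ in range(steps):
            i = self.select_focus()
            self.update(i, self.observe(i))
        return self.estimates
```

With zero noise, six exploration steps over a three-cell world visit each cell twice and recover its values exactly; the point of the sketch is only that sensing, attention, and model updating form one loop, as in the tenets above.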