Intrinsic motivation refers to behavior driven by internal rewards. Intrinsically motivated computational learning, also known as artificial curiosity, causes an autonomous agent to act so as to optimize its learning about itself and its environment by receiving internal rewards based on prediction errors. This type of learning, which builds on the reinforcement learning paradigm, has mostly been implemented in developmental robotics, where robots interact with and learn about themselves and their environment, expediting learning compared with taking random actions.
Expedited learning in rapidly changing environments is also highly relevant in big-data science, where building machine learning models can be very challenging due to continuous changes in data structures and the need for human intervention to tune variables and models over time. This work presents a novel method based on intrinsically motivated learning and curiosity loops for learning the data structures in large and varied datasets. An autonomous agent learns to select a subset of relevant features in the data (i.e., feature selection) to be used later for model construction. The agent optimizes its learning about the data structure over time without requiring external supervision.
Unlike other feature selection methods, which are restricted to either a batch or an online learning setting, the proposed method can be applied in both batch and online learning; it is not tied to a specific predictor and offers further advantages in running time and responsiveness to data changes. Experiments on three public datasets show that the proposed method, the Curious Feature Selection (CFS) algorithm, improves the accuracy of learning models built on the features it selects.
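To make the idea concrete, the core mechanism described above, selecting features by treating the reduction in prediction error as an internal curiosity reward, can be sketched as follows. This is a minimal illustrative sketch, not the paper's CFS algorithm: the synthetic data, the greedy subset-growing loop, and the least-squares predictor are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed setup): only features 0 and 1 are informative.
n, d = 200, 8
X = rng.normal(size=(n, d))
y = 2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * rng.normal(size=n)

def val_error(features):
    """Least-squares fit on a feature subset; mean squared error on a holdout."""
    idx = sorted(features)
    Xtr, Xte = X[:150][:, idx], X[150:][:, idx]
    ytr, yte = y[:150], y[150:]
    w, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return float(np.mean((Xte @ w - yte) ** 2))

# Curiosity loop (a sketch, not the paper's method): each candidate feature
# acts like a bandit arm; the internal reward is the drop in prediction error
# observed when that feature is added to the current subset.
selected = set()
baseline = float(np.mean((y[150:] - y[:150].mean()) ** 2))  # error of the mean predictor

for t in range(3):  # greedily grow the subset up to 3 features
    best, best_gain = None, 0.0
    for f in range(d):
        if f in selected:
            continue
        gain = baseline - val_error(selected | {f})  # intrinsic reward signal
        if gain > best_gain:
            best, best_gain = f, gain
    if best is None:  # no feature yields positive learning progress; stop
        break
    selected.add(best)
    baseline = val_error(selected)

print(sorted(selected))
```

Under this setup the loop keeps only features whose inclusion actually reduces the prediction error, so the two informative features are picked up while most noise features yield no reward and are skipped.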