This paper presents the initial steps towards a robot imagination system, which aims to provide robots with the cognitive capability of imagining how a set of actions will affect the robot's environment, even if the robot has never before seen those specific actions applied to its environment. The robot imagination system is part of a human-inspired, goal-oriented infrastructure that first learns the semantics of actions from human demonstration, and is then capable of performing the inverse process of semantic reconstruction through mental imagery. A key factor in this system is distinguishing how different actions affect different features of objects in the environment. Simple probabilistic and other machine learning methods are presented and compared as candidates for this first step of the inference process. The result of a composed action is inferred as the sum of the contributions of each of the query word's components. As an initial prototype, the learning process has been performed on synthetically generated minimalistic environments used as datasets, with a limited vocabulary of training words.
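The composed-action inference described above can be sketched in a few lines. This is a minimal illustration under stated assumptions: the action words, object features, and learned per-word deltas below are hypothetical placeholders, not the paper's actual training data or model.

```python
# Hypothetical learned mean change of each object feature per action word.
# In the paper's pipeline these contributions would come from the
# semantics learned by human demonstration; here they are made up.
learned_deltas = {
    "paint": {"color": 0.5, "x": 0.0},
    "move":  {"color": 0.0, "x": 1.5},
}

def imagine(initial_features, query_words, deltas):
    """Predict the resulting object features after a composed action
    as the sum of each query word's learned contribution."""
    result = dict(initial_features)
    for word in query_words:
        for feature, delta in deltas.get(word, {}).items():
            result[feature] = result.get(feature, 0.0) + delta
    return result

features = {"color": 1.0, "x": 2.0}
print(imagine(features, ["paint", "move"], learned_deltas))
# -> {'color': 1.5, 'x': 3.5}
```

The additive assumption is what lets the system generalize to word combinations never seen during training: each word's effect on each feature is estimated independently, then summed at query time.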