Humanoids can learn motor skills through the programming by demonstration framework, which matches the kinematic movements of a robot with those of a human. Continuous Goal-Directed Actions (CGDA) is a framework that complements this robot imitation paradigm. Instead of kinematic parameters, its encoding is centered on the changes an action produces on object features, where a feature can be any measurable characteristic of the object, such as its color or area. Executing actions encoded as CGDA allows a robot-configuration-independent achievement of tasks, avoiding the correspondence problem. By tracking object features during action execution, we create a trajectory in an n-dimensional feature space that represents the object's temporal states, allowing generalization, recognition, and execution of action effects on the environment. Experiments have been performed with a humanoid robot in a simulated environment. Evolutionary computation was used to calculate the robot's joint parameters, with the objective of generating a motor trajectory whose resulting feature trajectory matches the objective one. In the first experiment, the robot performs a spatial trajectory encoded through spatial object features; in a second, the robot paints a wall by following a color feature encoding.
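The evolutionary step described above can be sketched minimally as follows. This is an illustrative toy, not the paper's implementation: the forward model `feature_trajectory` (which maps joint parameters to object feature states) and all numeric settings are assumptions chosen only to show the optimization loop, i.e., evolving joint parameters until the achieved feature trajectory matches the objective one.

```python
import random

def feature_trajectory(joints):
    # Hypothetical forward model: each joint parameter increments the
    # object's feature state, yielding a 1-D feature trajectory.
    traj, state = [], 0.0
    for j in joints:
        state += j
        traj.append(state)
    return traj

def cost(joints, target):
    # Discrepancy between the achieved and the objective feature trajectory.
    achieved = feature_trajectory(joints)
    return sum((a - t) ** 2 for a, t in zip(achieved, target)) ** 0.5

def evolve(target, dim, pop_size=30, generations=200, sigma=0.2, seed=0):
    # Simple elitist evolutionary loop: keep the best half of the
    # population, mutate it with Gaussian noise, repeat.
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: cost(ind, target))
        parents = pop[: pop_size // 2]
        children = [[g + rng.gauss(0, sigma) for g in p] for p in parents]
        pop = parents + children
    return min(pop, key=lambda ind: cost(ind, target))

if __name__ == "__main__":
    objective = [0.5, 1.0, 1.5, 2.0]   # objective feature trajectory
    best = evolve(objective, dim=len(objective))
    print(cost(best, objective))       # small residual discrepancy
```

In the actual system the forward model would be the simulated robot itself, and the feature trajectory would be the tracked object features (e.g., position or color) rather than a toy scalar state.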