Programming by demonstration (PbD) enables a robot to reproduce the kinematic movements of a human demonstrator. The Continuous Goal-Directed Actions (CGDA) framework presented here additionally encodes the effects of a demonstrated action, which PbD does not capture. CGDA supports generalization, recognition, and execution of action effects on the environment. Beyond kinematic parameters (joint positions, velocities, etc.), CGDA focuses on the changes an action produces on the object (spatial, color, shape, etc.). By tracking object features during the execution of an action, we create a trajectory in an n-dimensional feature space that represents the object's temporal states. Discretizing repeated demonstrations of the action yields a cloud of points in this space.

Action generalization is performed by extracting the average point of each sequential temporal interval of the point cloud and interpolating these points with Radial Basis Functions (RBF), which yields a generalized multidimensional object feature trajectory. Action recognition compares the feature trajectory of a query sample with the stored generalizations; the discrepancy score between trajectories is computed with Dynamic Time Warping (DTW). For execution, robot joint trajectories are computed in a simulator through evolutionary computation: object features are extracted from sensors, and the fitness of each evolutionary individual is the DTW discrepancy between the simulated action's feature trajectory and the generalization.
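To make the generalization step concrete, the sketch below assumes each demonstration is already available as a pair of a normalized time vector in [0, 1] and an array of object feature vectors; the function name, the number of temporal intervals, and the output resolution are illustrative choices, not part of a reference CGDA implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def generalize(demos, n_intervals=20, n_out=100):
    """Average the pooled point cloud per sequential temporal interval
    and interpolate the interval means with Radial Basis Functions.

    demos: list of (t, F) pairs, where t is a (S,) array of normalized
    times in [0, 1] and F is a (S, n_features) array of object features.
    Returns a (n_out, n_features) generalized feature trajectory.
    """
    # Pool all demonstration repetitions into one cloud of points.
    times = np.concatenate([t for t, _ in demos])
    feats = np.vstack([F for _, F in demos])

    # Assign each point to one of n_intervals sequential temporal bins
    # and take the mean feature vector of each non-empty bin.
    idx = np.minimum((times * n_intervals).astype(int), n_intervals - 1)
    centers, means = [], []
    for k in range(n_intervals):
        mask = idx == k
        if mask.any():                      # skip empty intervals
            centers.append((k + 0.5) / n_intervals)
            means.append(feats[mask].mean(axis=0))
    centers = np.asarray(centers).reshape(-1, 1)
    means = np.vstack(means)

    # RBF interpolation of the interval means over normalized time.
    rbf = RBFInterpolator(centers, means)
    t_query = np.linspace(0.0, 1.0, n_out).reshape(-1, 1)
    return rbf(t_query)
```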
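For the recognition step, a plain dynamic-programming DTW makes the discrepancy score explicit; optimized DTW libraries exist, but the minimal version below suffices for illustration. The Euclidean local cost and the argmin over stored generalizations are assumptions for this sketch.

```python
import numpy as np

def dtw_distance(a, b):
    """DTW discrepancy between two feature trajectories a (n, d) and
    b (m, d), using Euclidean distance as the local cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            # Best of match, insertion, and deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def recognize(query, generalizations):
    """Return the label of the generalized feature trajectory that is
    closest to the query sample under DTW."""
    scores = {label: dtw_distance(query, g)
              for label, g in generalizations.items()}
    return min(scores, key=scores.get)
```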
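The execution step can likewise be sketched as a simple evolutionary loop, reusing `dtw_distance` from the recognition sketch above. Here `simulate_features` stands in for the simulator plus sensor-based feature extraction (it maps a candidate joint trajectory to the resulting object feature trajectory); both that interface and the truncation-plus-mutation scheme are illustrative assumptions, not the paper's exact evolutionary algorithm.

```python
import numpy as np

def evolve_joint_trajectory(simulate_features, generalization,
                            n_joints=6, n_steps=50, pop_size=30,
                            generations=200, sigma=0.05,
                            rng=np.random.default_rng(0)):
    """Evolve a joint trajectory whose simulated effect on the object
    matches the generalized feature trajectory under DTW.

    simulate_features: callable mapping a (n_steps, n_joints) joint
    trajectory to the object feature trajectory observed in simulation
    (assumed interface, standing in for the simulator and sensors).
    """
    # Population of candidate joint trajectories.
    pop = rng.normal(0.0, 1.0, size=(pop_size, n_steps, n_joints))
    for _ in range(generations):
        # Fitness: DTW discrepancy between simulated and generalized effects.
        fitness = np.array([dtw_distance(simulate_features(ind), generalization)
                            for ind in pop])
        # Keep the best half; refill with Gaussian mutations of survivors.
        survivors = pop[np.argsort(fitness)[:pop_size // 2]]
        children = survivors + rng.normal(0.0, sigma, size=survivors.shape)
        pop = np.concatenate([survivors, children])
    fitness = np.array([dtw_distance(simulate_features(ind), generalization)
                        for ind in pop])
    return pop[np.argmin(fitness)]
```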