In this paper, we present a novel methodology for obtaining imitative and innovative postural movements in a humanoid based on human demonstrations at a different kinematic scale. We collected motion data from a group of human participants standing up from a chair. By modeling the human as an actuated 3-link kinematic chain and defining a multi-objective reward function of zero-moment point and joint torques to represent stability and effort, we computed reward profiles for each demonstration. Since individual reward profiles show variability across demonstration trials, the underlying state transition probabilities were modeled using a Markov chain. Based on the argument that the reward profiles of the robot should exhibit the same temporal structure as those of the human, we used differential evolution to compute a trajectory that satisfies all humanoid constraints and minimizes the difference between the robot's reward profile and the profile predicted if the robot were to imitate the human. In this framework, robotic imitation amounts to developing a policy whose temporal reward structure matches that of a group of human demonstrators across an array of demonstrations. Skill innovation was then obtained by optimizing a signed reward error once imitation had been achieved. Experimental results obtained with the humanoid robot HOAP-3 are presented.
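
To make the reward-profile-matching step concrete, the following is a minimal sketch of how differential evolution could be used to fit a parameterized trajectory so that its reward profile tracks a human-derived reference profile. It is not the authors' implementation: the trajectory parameterization, the torque and ZMP surrogates, the weights, and the helper names (reward_profile, profile_mismatch) are illustrative assumptions, and scipy's differential_evolution stands in for whatever DE variant is used in the paper.

```python
# Sketch: match a robot reward profile to a human-derived reference profile
# using differential evolution (scipy.optimize.differential_evolution).
# The reward penalizes a ZMP-deviation surrogate and joint-torque effort;
# all models, weights, and bounds below are placeholder assumptions.
import numpy as np
from scipy.optimize import differential_evolution

T = 50                                               # time steps in the profile
human_reward_profile = np.linspace(-1.0, 0.0, T)     # placeholder reference profile

def reward_profile(params):
    """Map trajectory parameters to a per-time-step reward profile.
    The 'trajectory' is a toy 3-link joint-angle ramp; the reward combines
    a crude ZMP-deviation term and a joint-torque effort term."""
    q = np.outer(np.linspace(0.0, 1.0, T), params[:3])   # joint angles over time
    tau = np.gradient(np.gradient(q, axis=0), axis=0)    # acceleration as torque surrogate
    zmp_dev = np.abs(q.sum(axis=1) - params[3])          # surrogate ZMP deviation
    w_zmp, w_tau = 1.0, 0.1                              # assumed objective weights
    return -(w_zmp * zmp_dev + w_tau * np.sum(tau**2, axis=1))

def profile_mismatch(params):
    """Objective: squared error between robot and reference reward profiles."""
    return np.sum((reward_profile(params) - human_reward_profile) ** 2)

bounds = [(-np.pi, np.pi)] * 3 + [(0.0, 1.0)]            # joint amplitudes + ZMP target
result = differential_evolution(profile_mismatch, bounds, seed=0, maxiter=200)
print("best parameters:", result.x, "profile mismatch:", result.fun)
```

In the actual method, the reward terms would come from the humanoid's dynamics and ZMP computation, the reference profile from the Markov-chain prediction over human demonstrations, and the bounds from the robot's joint and stability constraints; for the innovation phase, the mismatch objective would be replaced by a signed reward error.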