In household settings, Learning from Demonstration techniques can enable end-users to teach their robots new skills. Moreover, such demonstrations should be accessible through a straightforward setup, such as a single visual sensor. This study presents a pipeline that uses a single RGB-D sensor to capture human arm demonstrations, taking into account the key points of the arm, and uses them to control a non-anthropomorphic robotic arm. To this end, we present the TAICHI algorithm (Tracking Algorithm for Imitation of Complex Human Inputs). The method detects key points on the human arm and maps them to the robot, applies Gaussian filtering to smooth the movements and reduce sensor noise, and uses an optimization algorithm to find the configuration closest to that of the human arm while avoiding collisions with the environment and with the robot itself. The novelty of the method lies in its use of key points from the human arm, specifically the end-effector and the elbow, to derive a similar configuration for a non-anthropomorphic arm. Through tests covering various movements performed at different speeds, we validated the method and confirmed its ability to reproduce the desired outcomes at the robot’s end-effector and joints.
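As an illustration of the two processing steps named above (Gaussian smoothing of the tracked key points and a nearest-configuration search), the following is a minimal, hedged sketch in Python. It is not the paper's implementation: the planar three-link forward kinematics `toy_fk`, the elbow weight `w_elbow`, the choice of `scipy`'s `gaussian_filter1d` and L-BFGS-B optimizer are all assumptions for illustration, and the collision-avoidance term of the actual algorithm is omitted.

```python
# Hedged sketch: smooth noisy elbow/wrist key points with a Gaussian filter,
# then pick the joint configuration whose forward kinematics best matches them.
# The kinematic model and weights below are illustrative assumptions only.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize


def smooth_keypoints(traj, sigma=2.0):
    """traj: (T, 3) array of noisy 3-D key-point positions from the RGB-D sensor."""
    return gaussian_filter1d(traj, sigma=sigma, axis=0)


def toy_fk(q, link_lengths=(0.3, 0.25, 0.2)):
    """Hypothetical planar 3-link chain; returns ('elbow', end-effector) positions."""
    x = y = angle = 0.0
    points = []
    for qi, length in zip(q, link_lengths):
        angle += qi
        x += length * np.cos(angle)
        y += length * np.sin(angle)
        points.append(np.array([x, y, 0.0]))
    return points[1], points[-1]


def nearest_configuration(q0, elbow_target, ee_target, w_elbow=0.3):
    """Minimize the weighted distance of the robot's elbow analogue and
    end-effector to the smoothed human key points (no collision term here)."""
    def cost(q):
        elbow, ee = toy_fk(q)
        return np.sum((ee - ee_target) ** 2) + w_elbow * np.sum((elbow - elbow_target) ** 2)

    return minimize(cost, q0, method="L-BFGS-B").x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic noisy wrist trajectory standing in for RGB-D key-point tracking.
    noisy_wrist = np.cumsum(rng.normal(0.0, 0.01, size=(100, 3)), axis=0)
    wrist = smooth_keypoints(noisy_wrist)
    q = nearest_configuration(np.zeros(3),
                              elbow_target=np.array([0.2, 0.2, 0.0]),
                              ee_target=wrist[-1])
    print("joint configuration:", q)
```

In the full pipeline, the cost would additionally penalize configurations that collide with the environment or the robot itself, and the optimization would be run per frame of the smoothed demonstration.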