The architecture of the human face is complex, consisting of 268 voluntary muscles that act in coordination to create real-time facial expressions. To replicate facial expressions on a humanoid face using discrete actuators, the first and foremost step is the identification of pairs of origin and sinking points (SPs). In this paper, we address this issue and present a graphical analysis technique that can be used to design expressive robotic faces. The underlying criterion in the design of such faces is the deformation of a soft elastomeric skin through tension in anchoring wires attached at one end to the skin through the sinking point and at the other end to the actuator. The paper also addresses the singularity problem of facial control points and important phenomena such as slacking of the actuators. Experimental characterization on a prototype humanoid face was performed to validate the model and demonstrate its applicability on a generic platform.