Abstract

Efficient robot integration can be realized by matching real and virtual robots, and accurate robot models can be generated by kinematic parameter calibration. Selecting the end-effector poses at which positioning errors are measured is critical in kinematic parameter calibration. Ideal pose selection maximizes calibration accuracy for a given measurement uncertainty and optimizes measurement cost and utility. In designing the pose selection process, observability indices are widely accepted criteria for evaluating calibration performance. Observability indices represent the effect of uncertainty in the measured end-effector poses on the calibrated parameters. However, unlike expensive direct measurement with a laser, low-cost camera-based kinematic calibration estimates the end-effector poses from the marker points in the captured image. The variance of the detected marker positions biases the end-effector poses and, eventually, the calibrated parameters. Therefore, this study proposes extended observability indices for pose selection based on this bias to realize accurate calibration with a low-cost camera. The target observability index is O1, a scale-free, reliable index used in kinematic calibration; considering the visual bias, we extend it to Ov1. This study evaluated Ov1 by comparing the positioning accuracies obtained after calibration on poses selected by maximizing Ov1, the original O1, O3, which is known as the best criterion for restraining end-effector positioning uncertainty, and Ov3, the extension of O3 included for consistency. A ball-bar test showed that the poses selected by the index Ov1 exhibited higher positioning accuracy than those selected by the other indices.

1 Introduction

Robotic manipulators are being increasingly deployed in the manufacturing industry as their performance, affordability, and safety improve. In addition, the variable demand inherent in agile manufacturing requires manipulators to work flexibly and accurately on various tasks. For example, in flexible manufacturing, a manipulator needs to determine the appropriate picking pose based on minor differences among products. It needs to identify these differences in a simulated environment based on the product model, without the manual teaching required by current automation products. As this example shows, accurate modeling of manipulators to achieve accurate positioning is key to realizing flexible manufacturing with robotic manipulators. Standard commercial manipulators exhibit high repeatability in positioning; however, their absolute positioning accuracy is lower than their repeatability. This lower absolute positioning accuracy is primarily because of errors in the kinematic model caused by differences in parameters, such as link lengths and joint angle offsets, between individual robots arising from manufacturing tolerances. Furthermore, other internal uncertainties, such as the degradation of a manipulator, and external uncertainties, such as environmental temperature, change these errors over time. To deal with internal and external uncertainties, a new model-free approach that adaptively controls the manipulator based on measurement has recently been developed [1,2]. This approach can ideally deal with changes in the model, including non-parametric factors, quickly. However, for reliability reasons, the traditional approach of modeling the robot and identifying the model parameters through robot calibration is still widely used, especially in the industrial field.

Robot calibration, including kinematic calibration for parameters such as joint angle and link length, and non-kinematic calibration for parameters such as joint compliance and temperature-induced deformation, has been the subject of robotics research for decades [3–6]. In particular, the positioning error caused by kinematic errors is more significant than that caused by non-kinematic errors [6], and many methods have been proposed for its calibration. The typical first step in these methods is identifying the error between the target pose and the actual pose of a manipulator's end-effector by measuring it in various manipulator configurations. There are many methods for measuring this error. The first commonly applied approach uses the physical geometric constraints generated by the closed loop between the end-effector and the reference frame. A coordinate measuring machine [7–9] and a ball-bar [10–13] are the major instruments used for measurement in this approach. This approach enables simple and reliable measurements but requires space to fix the instruments. It also makes it difficult to measure various end-effector poses without frequently relocating the measuring instruments. Another measurement approach is the direct measurement of the end-effector pose. A laser tracker is the most popular and reliable device used in this approach [14,15,9,16–18], as it can measure arbitrary end-effector poses with high precision. However, it is expensive and requires a large installation area, making periodic measurements for daily or monthly maintenance in a manufacturing system difficult.

Kinematic calibration using a monocular camera and a marker placed around the manipulator is an attractive alternative for the latter approach that avoids installing costly measurement devices [19–21,16,22]. In these studies, the authors realized kinematic calibration using a monocular camera attached to the end-effector to estimate its pose. The camera captures a checkerboard pattern of known dimensions and estimates the camera pose by stereo-matching the corner points. Balanji et al. [23] proposed a unique calibration method in which a monocular camera fixed in the environment captures a fiducial marker on a three-dimensional structure attached to the end-effector to estimate its pose. Recently, Boby [24] proposed another potent approach that calibrates kinematic parameters directly from the errors between the points on the marker without end-effector pose estimation. One of the fundamental problems of camera-based kinematic calibration is its lower measurement precision compared with conventional physical constraint-based devices and laser trackers, due to unavoidable factors in image processing such as noise and lens distortion. Moreover, the narrower measurement range resulting from the camera's limited field of view (FOV) is also a fundamental limitation compared with calibration using a laser tracker.

Therefore, pose selection before calibration to enable the capture of the marker by a camera is vital to the success of the calibration. The poses must be selected by considering the time required for capturing, space limitations, and, especially, their performance. The calibration accuracy, subject to the limitation of the camera accuracy, is the measure of performance of this method. Such robot pose selection has been a major problem not only for camera-based calibration but also for calibration with a laser tracker. Classic calibration studies have proposed several observability indices between the end-effector space and the space of robot kinematic parameters for pose selection [25–28], because suppressing the effects of measurement errors on parameter calibration is equivalent to improving the observability from the parameter space to the measurement space. Furthermore, several methods for obtaining measurement poses that maximize the corresponding indices have been proposed [26,29–31]. In camera-based kinematic calibration, Renaud et al. [32] selected measurement poses for the camera-based calibration of a parallel robot by optimizing the observability indices called O2 [25] and O4 [28] based on the pre-measured statistical properties of noise in the camera system. Filion et al. [17] used the observability index called O1 [26] to select robot poses from a dataset of poses measured by their camera-based system and a laser tracker. The method of pre-measuring noise is model-free and versatile, but it requires time-consuming measurements to acquire sufficient noise data for many robot poses. Furthermore, classical observability indices, such as O1, O2, and O4, assume isotropic uncertainty for the end-effector pose. However, the end-effector pose estimation process with a camera and marker points biases the uncertainty, because the points shift on the image plane by an amount governed by the uncertainty of image processing.
Therefore, there may be more appropriate measurement poses for camera-based calibration that consider this bias. Boby [24] determined a robot's trajectory in camera-based calibration by identifying the region where the condition number of the identification Jacobian between the error in image points and the robot kinematics becomes small. His approach can be regarded as accounting for the visual bias through the condition number of a single pose. However, evaluating all measurement poses together using an observability index may lead to a more robust camera-based calibration.

This study aims to realize accurate, periodic, camera-based kinematic calibration, such as daily kinematic parameter tuning to compensate for degradation or temperature change. For this purpose, this study extends the existing observability indices for camera-based calibration by considering the error propagation from the image space to the space of robot kinematics, using the camera identification Jacobian matrix employed in visual servoing. Furthermore, this study addresses O1 as the target measure among the five measures, O1 to O5, based on the report that it is the best criterion for reducing the variance of kinematic parameters with invariance to scaling. The new extended observability index, named Ov1, realizes pose selection without the error pre-measurement otherwise required for a camera-based measurement system. We apply the extended observability index Ov1 to the traditional pose selection procedure and compare the calibration results with those of the poses obtained based on the original observability index O1 to validate the effectiveness of the extension for camera-based calibration. Furthermore, this study also contrasts the results with those using another index, O3, an effective index for reducing the variance of end-effector positioning, to evaluate Ov1.

The remainder of this paper is organized as follows: Sec. 2 describes the camera-based kinematic calibration method assumed in this study before introducing the observability indices. Section 3 proposes extended observability indices that consider the visual bias based on the relationship between the change in the image points and the kinematic parameters. Finally, in Sec. 4, experiments are conducted to verify the effectiveness of the proposed indices, and Sec. 5 concludes the study.

2 Camera-Based Kinematic Calibration

Before introducing the extended observability indices for robot pose selection, this section introduces the camera-based kinematic calibration using the product of exponentials (POE) formula assumed in this study, along with the parameter definitions. Section 2.1 introduces the kinematics required for camera-based kinematic calibration, which consists of the kinematics between the local frames of the robot links, the tool coordinate frame, the camera image frame, and the world coordinates. Section 2.2 introduces the camera-based kinematic calibration based on these kinematics.

2.1 Robot and Camera Kinematics.

Kinematics representation using the Denavit–Hartenberg parameters is still the most popular notation in robot geometry. However, this representation has a singularity problem in the context of parameter identification, unlike in robot kinematics analysis: when adjacent joint axes are close to parallel, the common normal is not uniquely defined, and the parameter values become undefined. Although many parameterization methods have been proposed to avoid this instability in kinematic calibration [33–36], robot kinematics defined by POE formulas can naturally introduce the derivative of a robot's joint poses for a given robot pose using the relationship between the Lie group and the Lie algebra [37,38]. Because of the smoothness of the parameter update for end-effector positioning errors, many recent robot calibration studies have used POE formulas [7,15,39,23]. We represent the robot kinematics by the local POE formula [15] in the following discussion.

Let the target serial link manipulator have $n$ revolute joints. A monocular camera (hand-eye camera) is attached to the end of the robot. The robot's tool coordinate frame coincides with the camera frame at its focal point. The POE formula uses an exponential map from an element in the Lie algebra $se(3)$ to an element in the Lie group $SE(3)$ for the forward kinematics through the joints. The former corresponds to the motion of joints and links, and the latter to the rigid body motion typically represented by a homogeneous transformation matrix. Let $\omega = [\omega_x, \omega_y, \omega_z]^\top$ be the angular velocity, and $v = [v_x, v_y, v_z]^\top$ be the translational velocity. $\hat{\xi} \in se(3)$ is expressed as
$$\hat{\xi} = \begin{bmatrix} \hat{\omega} & v \\ 0 & 0 \end{bmatrix} \tag{1}$$
The operation $\vee$ converts $\hat{\xi}$ into $\xi \in \mathbb{R}^{6}$, as follows:
$$\xi = (\hat{\xi})^{\vee} = [\omega^{\top},\ v^{\top}]^{\top} \tag{2}$$
When $\|\omega\| = 1$, $\hat{\xi}$ is mapped to a homogeneous transformation matrix, which is an element of $SE(3)$, as
$$\exp(\theta\hat{\xi}) = \begin{bmatrix} e^{\theta\hat{\omega}} & (I - e^{\theta\hat{\omega}})(\omega \times v) + \omega\omega^{\top}v\,\theta \\ 0 & 1 \end{bmatrix} \tag{3}$$
where $I$ represents the identity matrix. This exponential mapping precisely converts the element in $se(3)$ to the corresponding matrix in $SE(3)$, and its derivation can be found in Sec. 4.4.3 of Ref. [38]. $\theta$ is the rotational angle around the rotational axis $\omega$. When $\omega = 0$, $\exp(\theta\hat{\xi})$ is a simple translational matrix of the homogeneous coordinates, and $\theta$ is the amount of translational motion. We can use these smooth six-dimensional vectors to represent rigid body poses for kinematic calibration instead of local representations with singularity and ambiguity problems.
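The exponential map above can be sketched numerically. The following is a minimal illustration (not the paper's implementation), using Rodrigues' formula for the rotation part and the standard $SE(3)$ translation term, assuming $\|\omega\| = 1$ for revolute motion and $\omega = 0$ for pure translation:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (the ^ operator for so(3))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(omega, v, theta):
    """Exponential map from a twist (omega, v) and angle theta to SE(3).

    Assumes ||omega|| == 1 (revolute case); omega == 0 gives pure translation.
    """
    g = np.eye(4)
    if np.allclose(omega, 0.0):          # prismatic case: pure translation
        g[:3, 3] = np.asarray(v, dtype=float) * theta
        return g
    W = hat(omega)
    # Rodrigues' formula for the rotation part
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    g[:3, :3] = R
    # Translation part of the SE(3) exponential (cf. Eq. (3))
    g[:3, 3] = (np.eye(3) - R) @ np.cross(omega, v) + np.outer(omega, omega) @ v * theta
    return g
```

For example, `exp_twist([0, 0, 1], [0, 0, 0], np.pi / 2)` yields a pure rotation of 90 deg about the z-axis.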
Let $q_i$ be the angle of the $i$th joint connecting the $(i-1)$th and the $i$th links. Let the 0th link be the robot base. The coordinate frame of the $i$th link is on the $i$th joint axis. Let $g_i \in SE(3)$ be the relative pose of the local coordinate frame of the $(i+1)$th link with respect to the frame of the $i$th link when $q_{i+1} = 0$, and let $g_0$ and $g_n$ be the robot base pose in the world coordinate frame and the tool pose for the $n$th joint, respectively, as depicted in Fig. 1. Thus, the forward kinematics $g_{st}(q)$ is
$$g_{st}(q) = g_{0}\exp(q_{1}\hat{\zeta})\,g_{1}\exp(q_{2}\hat{\zeta})\,g_{2}\cdots\exp(q_{n}\hat{\zeta})\,g_{n} \tag{4}$$
where $\hat{\zeta} \in se(3)$ denotes the motion of the revolute joint on the link frame. Therefore, $\exp(q_i\hat{\zeta})$ represents the pose change caused by the joint motion, and $g_i$ represents the pose difference between two adjoining joints. For each homogeneous transformation matrix $g_i$, there exists $p_{ri} \in \mathbb{R}^{6}$ such that
$$g_{i} = \exp(\hat{p}_{ri}) \tag{5}$$
Therefore, a vector $p_r = [p_{r0}^{\top}, \ldots, p_{rn}^{\top}]^{\top} \in \mathbb{R}^{6n+6}$, the concatenation of $p_{r0}, \ldots, p_{rn}$, describes the kinematic parameters of the robot. These parameters describe the geometric relationship of the links, including the offsets of the joint angles.
Fig. 1
Relationship between frames in robot and camera kinematics
A calibration board with markers is placed at a specified position relative to the robot. The camera captures images of the calibration board to estimate its pose relative to the board. The focal length $f$ of the camera and the coordinates of the principal point in the image plane frame $[c_x, c_y]$ are the camera's intrinsic parameters. Here, these parameters are assumed to have been obtained through camera calibration beforehand. These intrinsic parameters project a point $x_c = [x_c, y_c, z_c]^{\top}$ in the camera frame to the image plane coordinate frame, as follows:
$$u = f\,\frac{x_{c}}{z_{c}} + c_{x}, \qquad v = f\,\frac{y_{c}}{z_{c}} + c_{y} \tag{6}$$
where $u$ and $v$ are the coordinates along the $x$-axis and $y$-axis of the camera image plane, respectively, as shown in Fig. 1. In this paper, for simplicity, the camera frame is also the tool frame. Let $g_{cs}$ be the transformation matrix from the camera frame to the world frame. A point $x_w = [x_w, y_w, z_w]^{\top}$ in the world frame, such as a corner point of the calibration marker, is projected to the camera frame as follows:
$$\begin{bmatrix} x_{c} \\ 1 \end{bmatrix} = g_{cs}^{-1}\begin{bmatrix} x_{w} \\ 1 \end{bmatrix} \tag{7}$$
as also depicted in Fig. 1. Equations (6) and (7) provide the positions of the marker points in the image plane for the given marker positions in the world frame when the camera pose is known. $g_{cs}$ also has the parameter $p_c \in \mathbb{R}^{6}$, which is
$$g_{cs}^{-1} = \exp(\hat{p}_{c}) \tag{8}$$
and it is also a parameter to be identified during the camera-based calibration. Using these transformations, all the mappings from the world frame to the image points captured by the camera on the end-effector are represented by six-dimensional vectors.
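Equations (6) and (7) together can be sketched as a short projection routine. This is a minimal illustration, assuming $g_{cs}$ is given as a 4×4 homogeneous matrix and $f$ is expressed in pixels:

```python
import numpy as np

def project_point(x_w, g_cs, f, cx, cy):
    """Project a world-frame point to pixel coordinates (cf. Eqs. (6), (7)).

    g_cs: 4x4 camera-to-world transform; its inverse maps world -> camera.
    """
    x_c = np.linalg.inv(g_cs) @ np.append(x_w, 1.0)   # world -> camera frame
    u = f * x_c[0] / x_c[2] + cx                      # perspective division
    v = f * x_c[1] / x_c[2] + cy
    return np.array([u, v])
```

With the camera at the world origin, a point on the optical axis projects exactly to the principal point $[c_x, c_y]$.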

2.2 Kinematic Calibration From Camera Images.

In camera-based calibration, first, the camera pose must be identified from an image of the calibration board, which has $l$ marker points. For the given marker point set on the calibration board, estimating the camera pose parameter $\hat{p}_c$ is the perspective-n-point (PnP) problem [40], which takes the following form:
$$\hat{p}_{c} = \arg\min_{p_{c}} \sum_{i=1}^{l} \left\| \begin{bmatrix} u_{i} \\ v_{i} \end{bmatrix} - f\!\left(\exp(\hat{p}_{c})\,\tilde{x}_{wi}\right) \right\|^{2} \tag{9}$$
where $[u_i, v_i]^{\top}$ and $x_{wi}$ denote the $i$th marker point position in the image and world frames, respectively ($\tilde{x}_{wi}$ being its homogeneous representation), and $f$ represents the vector function of Eq. (6). This optimization minimizes the errors between the detected marker positions and their positions estimated from the camera pose by varying that pose. The relationship between the change in an image plane point $\delta e_c$ and a change in the camera pose $\delta p_c$ is given by
$$\delta e_{c} = J_{c}\,\delta p_{c} \tag{10}$$
as shown in Fig. 2(a). Jc is the camera identification Jacobian matrix for a point xc in the camera frame and is given by
$$J_{c} = f\begin{bmatrix} xy & -(1+x^{2}) & y & -\dfrac{1}{z_{c}} & 0 & \dfrac{x}{z_{c}} \\ 1+y^{2} & -xy & -x & 0 & -\dfrac{1}{z_{c}} & \dfrac{y}{z_{c}} \end{bmatrix}, \qquad x = \frac{x_{c}}{z_{c}},\; y = \frac{y_{c}}{z_{c}} \tag{11}$$
This matrix is also called the interaction matrix in visual servo control [41,42], and it represents the relationship between a change in the camera frame and the resulting change in an image point. A point's $x$, $y$ coordinates in the image plane depend on its camera frame $x$, $y$ coordinates and depth. Time differentiation of this relationship reveals how camera motion affects the point positions in the image frame, as given in the above matrix. Therefore, the following iterative update of $p_c$ reaches the optimum of Eq. (9) if its initial guess is sufficiently close to the optimal solution
$$p_{c} \leftarrow p_{c} + \delta p_{c} \tag{12}$$
where δpc is given by
$$\delta p_{c} = \tilde{J}_{c}^{+}\,\delta\tilde{e}_{c}, \qquad \tilde{J}_{c} = \begin{bmatrix} J_{c1} \\ \vdots \\ J_{cl} \end{bmatrix}, \quad \delta\tilde{e}_{c} = \begin{bmatrix} \delta e_{c1} \\ \vdots \\ \delta e_{cl} \end{bmatrix} \tag{13}$$
where $J_{ci}$ denotes the camera identification Jacobian matrix for the $i$th marker point, $^{+}$ denotes the pseudoinverse of the stacked matrix, and $\delta e_{ci}$ denotes the error of this point in the image plane per Eq. (6). From Eq. (13), we can determine the effect of the detected marker point errors on the estimated camera pose.
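The per-point Jacobian and the Gauss–Newton update of Eqs. (10)–(13) can be sketched as follows. The interaction matrix here follows the common visual-servo convention for normalized image coordinates with $(v, \omega)$ twist ordering, which may differ from the paper's $J_c$ by sign and ordering; `pnp_step` is a hypothetical helper name:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix for a normalized image point (x, y) at depth Z.

    Conventions follow the standard visual-servo formulation; the paper's
    Jc may differ by sign and by the ordering of the twist components.
    """
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def pnp_step(points_detected, points_predicted, depths):
    """One Gauss-Newton pose update (cf. Eqs. (12), (13)): stack the
    per-point interaction matrices and solve with the pseudoinverse."""
    J = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points_predicted, depths)])
    e = (np.asarray(points_detected) - np.asarray(points_predicted)).ravel()
    return np.linalg.pinv(J) @ e            # 6-vector camera pose update
```

In the sketch, `points_detected` are the measured normalized image points and `points_predicted` are those rendered from the current pose estimate.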
Fig. 2
Relationship between change in parameters: (a) stacked camera identification Jacobian matrix and (b) robot identification Jacobian matrix
Let the robot capture the board in $m$ different poses, where $m$ is sufficiently large to identify the kinematic parameters of the robot. The solution to the above PnP problem produces the corresponding $m$ camera poses, which in this study coincide with the tool frame poses. Then, the $i$th measured end-effector pose is given by $g_{sc}^{i} = (g_{cs}^{i})^{-1} = \exp(\hat{p}_{ci})$. The kinematic calibration produces the optimal robot parameters $\hat{p}_r \in \mathbb{R}^{6n+6}$ from the tool poses obtained from the above camera pose estimation, such that
$$\hat{p}_{r} = \arg\min_{p_{r}} \sum_{i=1}^{m} \left\| \left( \log\!\left( \exp(\hat{p}_{ci})\, g_{sti}^{-1} \right) \right)^{\vee} \right\|^{2} \tag{14}$$
where $g_{sti}$ is the estimated end-effector pose for the $i$th measurement. The error between the measured and estimated poses is given by the transformation matrix $\exp(\hat{p}_{ci})\,g_{sti}^{-1}$. This homogeneous matrix in $SE(3)$ maps to the corresponding matrix in $se(3)$ by the logarithmic function [38], and Eq. (14) is the summation over the $m$ poses of its vector representation converted by Eq. (2). This optimization problem minimizes the errors by changing the kinematic parameters. By differentiating Eq. (4), the relationship between a change in the end-effector's pose $\delta e_r$ and the kinematic parameters $\delta p_r$ is obtained as
$$\delta e_{r} = J_{r}\,\delta p_{r} \tag{15}$$
where $J_r \in \mathbb{R}^{6\times(6n+6)}$ denotes the robot identification Jacobian matrix for kinematic calibration [15], as shown in Fig. 2(b). $J_r$ is given by
$$J_{r} = \begin{bmatrix} A_{p_{r0}} & \mathrm{Ad}_{g_{s1}}A_{p_{r1}} & \cdots & \mathrm{Ad}_{g_{sn}}A_{p_{rn}} \end{bmatrix} \tag{16}$$
where $A_{p_{ri}}$ and $\mathrm{Ad}_{g_{si}}$ denote the derivative of $\exp(\hat{p}_{ri})$ [38] and the adjoint matrix of the transformation $g_{si}$ from the base frame to the $i$th link, respectively. Let $\delta e_{ri}$ be the error of the $i$th pose in the cost function of Eq. (14), in the same manner as in the PnP problem. The following iterative update of $p_r$ reaches the optimum of Eq. (14)
$$p_{r} \leftarrow p_{r} + \delta p_{r} \tag{17}$$
where δpr is given by
$$\delta p_{r} = \tilde{J}_{r}^{+}\,\delta\tilde{e} \tag{18}$$
where $\delta\tilde{e}$ represents the concatenated vector of $\delta e_{ri}$, $\tilde{J}_r$ represents the stacked matrix of the robot identification Jacobian matrices over the $m$ poses, and $\tilde{J}_r^{+}$ represents its pseudoinverse. As a result, we can determine the effect of the errors in the end-effector poses on the calibrated kinematic parameters from Eq. (18). Furthermore, we can determine the effect of the detected marker point errors on the calibrated kinematic parameters by combining Eqs. (13) and (18).
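The update of Eq. (18) reduces to a pseudoinverse solve over the stacked Jacobian. The following is a minimal sketch; the optional Tikhonov damping is a common numerical safeguard added here as an assumption, not part of the paper's formulation:

```python
import numpy as np

def calib_step(J_stack, e_stack, damping=0.0):
    """Parameter update of Eq. (18): delta_pr = pinv(J~r) @ delta_e~.

    Optional Tikhonov damping (our addition) stabilizes the solve when
    the stacked Jacobian J~r is ill-conditioned.
    """
    if damping > 0.0:
        JtJ = J_stack.T @ J_stack + damping * np.eye(J_stack.shape[1])
        return np.linalg.solve(JtJ, J_stack.T @ e_stack)
    return np.linalg.pinv(J_stack) @ e_stack
```

With `damping=0.0` this is exactly the plain pseudoinverse update; a small positive value trades bias for numerical stability.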

3 Visual-Biased Observability Index

The visual observability indices proposed here are extensions of the standard observability indices that incorporate the camera identification Jacobian matrix described in the previous section. The proposed indices quantify the sensitivity of the kinematic parameter estimates to changes in the point positions in the image plane frame.

3.1 Observability Indices.

Sun and Hollerbach [43] classified and designated five observability indices, $O_1$ to $O_5$. These indices represent the robustness of the kinematic parameters with respect to errors in the measured end-effector poses. Let the non-zero singular values of the robot identification Jacobian matrix $\tilde{J}_r$ be $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_s$, indexed in descending order, so that $\sigma_1$ is the largest singular value and $\sigma_s$ the smallest. Borm and Meng [26] proposed
$$O_{1} = \frac{(\sigma_{1}\sigma_{2}\cdots\sigma_{s})^{1/s}}{\sqrt{m}} \tag{19}$$
as an observability index corresponding to the volume of the confidence hyper-ellipsoid. Driels and Pathre [25] proposed the inverse of the condition number
$$O_{2} = \frac{\sigma_{s}}{\sigma_{1}} \tag{20}$$
Nahvi et al. [44] proposed the minimum singular value
$$O_{3} = \sigma_{s} \tag{21}$$
as an observability index. Nahvi and Hollerbach [28] also proposed the square of the minimum singular value divided by the maximum singular value
$$O_{4} = \frac{\sigma_{s}^{2}}{\sigma_{1}} \tag{22}$$
Finally, Sun and Hollerbach [43] proposed
$$O_{5} = \left( \frac{1}{\sigma_{1}} + \frac{1}{\sigma_{2}} + \cdots + \frac{1}{\sigma_{s}} \right)^{-1} \tag{23}$$
analogous to A-optimality in the design of experiments. Maximizing one of these observability indices in measurement pose selection results in accurate kinematic calibration under the given uncertainty of the end-effector pose measurement. Sun and Hollerbach [43] discussed the physical differences among these indices from the viewpoint of the optimal design of experiments. They concluded that O1 minimizes the variance of the calibrated parameters regardless of the differences in scale between the parameters, and that O3 minimizes the positioning uncertainty after calibration. The other measures, O2, O4, and O5, are correlated with O1 and O3 as their upper and lower bounds, but it is difficult to establish a statistical advantage for O2, O4, and O5. Furthermore, scaling the parameters of the POE formula is difficult, so O1 is the most suitable measure for calibrating a robot modeled with the POE formula. Therefore, our attempt to extend the measures for robust camera-based robot calibration targets O1. Additionally, in the experiment section of this paper, the proposed measure is evaluated by measuring the positioning accuracy of the end-effector using a ball-bar test. O3 may show higher accuracy because of its advantage in reducing the uncertainty of end-effector positioning. Therefore, in the following sections, this study also includes O3 as a comparison target for the new measure.
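Given the singular values of the stacked identification Jacobian, the five indices are simple formulas. The sketch below follows the definitions as summarized here; the exact form of O5 is our reading of the A-optimality-type index and should be checked against [43]:

```python
import numpy as np

def observability_indices(J_stack, m):
    """O1..O5 from the singular values of the stacked identification Jacobian.

    m is the number of measurement poses. O5 follows our reading of the
    A-optimality-type definition (see lead-in).
    """
    s = np.linalg.svd(J_stack, compute_uv=False)
    s = s[s > 1e-12]                              # non-zero singular values
    O1 = np.exp(np.log(s).mean()) / np.sqrt(m)    # geometric mean / sqrt(m)
    O2 = s[-1] / s[0]                             # inverse condition number
    O3 = s[-1]                                    # minimum singular value
    O4 = s[-1] ** 2 / s[0]                        # noise amplification index
    O5 = 1.0 / np.sum(1.0 / s)                    # A-optimality-type index
    return O1, O2, O3, O4, O5
```

The geometric mean is evaluated in log space to avoid overflow when the product of many singular values is large.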

3.2 Extension of Observability Indices.

The above observability indices assume that the variance of the end-effector poses arises from their direct measurement. In image-based kinematic calibration, the variance of the image point positions affects the end-effector pose and deforms the shape of the end-effector pose variance. A more robust measurement pose selection for camera-based kinematic calibration is possible by using observability indices biased by the variance in the image plane.

From the transformation matrix from the camera frame to the world frame, $g_{cs}$, in Eq. (8), the transformation matrix from the world frame to the camera frame is $g_{sc} = g_{cs}^{-1} = \exp(\hat{p}_c)$. As defined above, the camera frame coincides with the tool frame. Therefore, the inverse of Eq. (13) combined with Eq. (15) provides the projection as follows:
$$\delta\tilde{e}_{c} = \tilde{J}_{c}J_{r}\,\delta p_{r} = J_{rc}\,\delta p_{r} \tag{24}$$
where $J_{rc} = \tilde{J}_c J_r$. This equation represents the change in the marker image points caused by an error in the kinematic parameters through the resulting end-effector pose error. Therefore, we can determine the effect of the error of the PnP solution in Eq. (12) on the kinematic calibration for a single pose measurement from this equation.

Let the new non-zero singular values of the matrix $\tilde{J}_{rc}$, which stacks $J_{rc}$ over the $m$ poses, be $\sigma'_1 \ge \sigma'_2 \ge \cdots \ge \sigma'_s$ in descending order. These singular values of the combined Jacobian matrix define a new observability index obtained by replacing the singular values of $\tilde{J}_r$ in $O_1$, Eq. (19), with these values. We call it the visual-biased observability index Ov1. It is the index of the effect of the variance of the kinematic parameters on the variance of the points in the image plane. Camera measurement poses selected to maximize this index are ideally robust to the uncertainty of marker point detection. Furthermore, we also define Ov3 as the extension of O3 in the same manner as Ov1. As shown by Sun and Hollerbach [43] using statistical analysis, Ov1 is the best scale-free index for reducing the variance of the kinematic parameters, whereas Ov3 is the best index for reducing the position variance and deviation in the image plane and thus increasing the end-effector positioning accuracy. Therefore, in the following sections, we consider Ov3 as a reference, and the main comparison is made between O1, O3, and Ov1.
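Computing Ov1 then only requires replacing $\tilde{J}_r$ with the stacked combined Jacobian before the singular value decomposition. A minimal sketch, assuming the per-pose matrices are available as NumPy arrays:

```python
import numpy as np

def visual_biased_O1(Jc_list, Jr_list, m):
    """Ov1: O1 evaluated on the stacked combined Jacobian J_rc = J~c @ J_r.

    Jc_list[i] is the stacked camera Jacobian (2l x 6) of pose i, and
    Jr_list[i] is the robot identification Jacobian (6 x (6n+6)) of pose i.
    """
    J_rc = np.vstack([Jc @ Jr for Jc, Jr in zip(Jc_list, Jr_list)])
    s = np.linalg.svd(J_rc, compute_uv=False)
    s = s[s > 1e-12]                              # non-zero singular values
    return np.exp(np.log(s).mean()) / np.sqrt(m)  # geometric mean / sqrt(m)
```

Ov3 follows the same pattern with `s[-1]` (the minimum singular value) in place of the geometric mean.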

4 Experiment

In this section, we compare the positioning accuracy of the manipulator after camera-based calibration using poses selected to maximize each observability index, namely, O1, O3, Ov1, and Ov3, to demonstrate that the visual-biased observability indices are more robust in camera-based calibration. The pose selection and calibration follow a standard protocol with up-to-date techniques, as described in the following sections, to focus on the effectiveness of Ov1 compared with the traditional indices O1 and O3.

4.1 Experimental Setup.

In the experiment, the target manipulator for the kinematic calibration was a six-axis industrial manipulator (VS-060, DENSO WAVE Inc., Aichi, Japan). The camera attached to the manipulator was an industrial CMOS camera (acA2440-20gc, Basler AG, Schleswig–Holstein, Germany) with a fixed-focus lens (LM8JC10M, Kowa Optronics Co., Ltd., Aichi, Japan), as depicted in Fig. 3. The number of pixels of the camera is 2448 × 2048, and the focal length of the lens is 8.5 mm.

Fig. 3
Manipulator with a camera attached to its end-effector and a calibration board placed in front of it

The marker pattern on the calibration board (CharuCo Target, calib.io ApS, Svendborg, Denmark) used to estimate the camera pose is a ChArUco marker, a combination of ArUco [45] and a checkerboard pattern. The ChArUco marker enables the camera pose estimation, even if some patterns are out of the camera’s FOV. The calibration board size is 200 mm × 150 mm, and the marker width is 6 mm. The pattern has 18 rows and 25 columns, and the maximum number of detectable marker points is 408. We placed the calibration board in front of the manipulator to position its center 0.32 m from the base frame of the manipulator, as depicted in Fig. 3.

We adopted the ball-bar test (QC20 ball-bar, Renishaw plc., England, UK), typically used for testing the positioning accuracy of CNC machines for such evaluations. The ball-bar system measures the error in the radius when the manipulator moves around a center to draw a circle. We placed the center of the ball-bar in the same position as the calibration board center, which is 0.32 m in front of the base frame of the manipulator (Fig. 4(a)), and placed the z-axis positioning stage to adjust the height of the ball-bar test (Fig. 4(b)). The radius for the ball-bar test is 0.1 m.

Fig. 4
Manipulator with ball-bar system for evaluation placed in (a) lower position and (b) higher position

4.2 Experimental Procedure.

First, we optimized 20 camera poses (m = 20) for capturing calibration board images to maximize each of the observability indices O1, O3, Ov1, and Ov3. The optimization was performed using the DETMAX algorithm [46], an extensively used pose selection method for kinematic calibration [31]. The DETMAX algorithm optimizes the pose set by adding or removing a pose to improve the cost function in each iteration. Typically, in the pose addition step, a pose is selected from a predefined pose dataset to improve the cost function. Instead of preparing a pose dataset, we obtained the additional pose using a derivative-free optimization method called simplicial homology global optimization [47]. The DETMAX algorithm balances the calculation cost and the global optimality of calibration pose selection well, but it still depends on the initial pose set. Therefore, we prepared 100 different initial pose sets, applied the DETMAX algorithm to each, and chose the result with the maximum observability index.
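A DETMAX-style exchange loop can be sketched as below. This is a generic illustration over a finite candidate set (the study instead generates each added pose with a global optimizer), and `index_fn` stands in for the chosen observability index:

```python
import random

def detmax(candidates, k, index_fn, iters=100):
    """DETMAX-style exchange: add the candidate that most increases the
    observability index, then remove the pose whose removal hurts it
    least; stop when the pose set no longer changes."""
    pose_set = random.sample(candidates, k)
    for _ in range(iters):
        # addition step: best pose to add from the remaining candidates
        rest = [p for p in candidates if p not in pose_set]
        best_add = max(rest, key=lambda p: index_fn(pose_set + [p]))
        grown = pose_set + [best_add]
        # removal step: drop the pose whose absence costs the least
        best_drop = max(grown, key=lambda p: index_fn([q for q in grown if q != p]))
        new_set = [q for q in grown if q != best_drop]
        if set(new_set) == set(pose_set):    # exchange no longer improves
            break
        pose_set = new_set
    return pose_set
```

With a toy index such as the sum over the set, the loop converges to the k best candidates regardless of the random initial set.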

Considering the camera's FOV and focus, and to reduce the number of optimization parameters, we parameterized the camera pose using a four-vector $\rho_i = [x, y, \theta, \varphi]^{\top}$ representing a pose directed toward the calibration board. $[x, y]$ denotes the position of the spherical coordinate center relative to the board center point. $\theta$ and $\varphi$ denote the elevation and azimuth angles of the spherical coordinate, respectively. Therefore, the camera pose $g_c$ with respect to the frame on the board is given by
(25)
where $S_\theta$, $C_\theta$, $S_\varphi$, and $C_\varphi$ represent $\sin\theta$, $\cos\theta$, $\sin\varphi$, and $\cos\varphi$, respectively, and $r$ represents the radius. The radius $r$ was fixed at 0.3 m based on the focal length of the lens, and the other parameters were restricted to −0.07 ≤ x ≤ 0.07 and −0.05 ≤ y ≤ 0.05 (meter), and −π/12 ≤ θ ≤ π/12 and −π ≤ φ < π (radian), considering the camera's FOV and the manipulator's range of motion.
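The spherical parameterization can be sketched as follows. Since Eq. (25) is not reproduced here, the look-at orientation construction is our assumption of one plausible completion, with the camera z-axis pointing at the offset sphere center:

```python
import numpy as np

def camera_pose_on_sphere(x, y, theta, phi, r=0.3):
    """Camera pose from the (x, y, theta, phi) parameterization: a point on
    a sphere of radius r centered at (x, y) on the board, with elevation
    theta and azimuth phi. The look-at orientation is an assumption, not
    the paper's Eq. (25)."""
    center = np.array([x, y, 0.0])
    offset = r * np.array([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)])
    position = center + offset
    # camera z-axis points at the sphere center on the board
    z_axis = (center - position) / np.linalg.norm(center - position)
    x_axis = np.cross([0.0, 1.0, 0.0], z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)
    g = np.eye(4)
    g[:3, 0], g[:3, 1], g[:3, 2], g[:3, 3] = x_axis, y_axis, z_axis, position
    return g
```

The cross product with the board y-axis is well defined here because the elevation bound |θ| ≤ π/12 keeps the viewing axis away from that direction.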

Next, the robot camera captured calibration board images in the optimized poses for each observability index. After calibration, the manipulator's joint alignment no longer admits an analytical inverse kinematics solution, whereas the ideal joint alignments of most commercial robots do. Therefore, we solved the inverse kinematics numerically after obtaining a rough solution from the analytical inverse kinematics of the VS-060. The optimization process produced 20 poses for each observability index, so the camera obtained 80 images in total. The camera was calibrated on all 80 images using the OpenCV library's camera calibration with the ChArUco marker. After the camera calibration, the iterative algorithm for solving the PnP problem in Eq. (12) estimated the camera pose for each image. Then, the iterative kinematic calibration of Eq. (17) estimated the actual kinematic parameters of the manipulator for each set of estimated camera poses obtained by the above pose selection process for each observability index. As a result, four calibrated kinematic parameter sets were obtained, corresponding to the camera poses selected according to O1, Ov1, O3, and Ov3.

Finally, the ball-bar test evaluated the quality of the kinematic calibration. The manipulator moved its end-effector, attached to the ball-bar end, to trace a circle centered on the ball-bar axis under 16 different conditions, each with a different combination of kinematic parameters, height, and direction. The circular trajectories were obtained by solving the inverse kinematics with the calibrated kinematic parameters based on O1, Ov1, O3, and Ov3. The height of the ball-bar was also varied: one height was the same as that of the calibration board during the calibration process, and the other was 0.2 m higher. The trajectories were generated in the counterclockwise (CCW) and clockwise (CW) directions around the ball-bar center. The circles that the manipulator drew under the eight conditions in each direction were compared with the target circles with a 0.1 m radius, with respect to their centers and radii.

4.3 Results.

Figure 5 shows the camera poses selected through pose optimization using DETMAX. These are the best pose sets with the maximum observability indices, selected from the initial 100 pose sets. The poses selected based on the visual-biased observability indices Ov1 and Ov3 tended to be distributed more toward the boundary of the region than the poses obtained on the basis of the original observability indices O1 and O3. In particular, the pose selection with O1 produced the least distributed and most symmetric poses, with some poses being close to each other.
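The pose selection follows Mitchell's DETMAX exchange idea: repeatedly add the candidate pose that most increases the observability index, then drop the selected pose whose removal hurts it least. A simplified sketch with random stand-in identification Jacobians (the O1 formula follows the usual definition; the shapes and candidate data are purely illustrative):

```python
import numpy as np

def o1_index(jacobians):
    """O1 = (product of singular values)^(1/m) / sqrt(n) for the stacked
    identification Jacobian of the n selected poses (Borm and Menq)."""
    J = np.vstack(jacobians)
    s = np.linalg.svd(J, compute_uv=False)
    return s.prod() ** (1.0 / s.size) / np.sqrt(len(jacobians))

def detmax(jacs, n_select, iters=30, seed=0):
    """One add/remove exchange per iteration (Mitchell's DETMAX, simplified)."""
    rng = np.random.default_rng(seed)
    sel = list(rng.choice(len(jacs), size=n_select, replace=False))
    score = lambda idx: o1_index([jacs[i] for i in idx])
    for _ in range(iters):
        rest = [i for i in range(len(jacs)) if i not in sel]
        add = max(rest, key=lambda i: score(sel + [i]))       # best addition
        grown = sel + [add]
        drop = max(grown,
                   key=lambda i: score([j for j in grown if j != i]))  # cheapest removal
        candidate = [j for j in grown if j != drop]
        if score(candidate) <= score(sel):
            break  # the exchange no longer improves the index
        sel = candidate
    return sel

# 100 random candidate "pose Jacobians" (6 residual rows x 8 parameters each).
rng = np.random.default_rng(1)
candidates = [rng.normal(size=(6, 8)) for _ in range(100)]
best = detmax(candidates, n_select=20)
```

In the actual experiment, each candidate Jacobian would come from evaluating the identification Jacobian at a reachable camera pose, and the index being maximized would be O1, Ov1, O3, or Ov3.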

Fig. 5
Best camera pose set selected by optimization for each observability index: (a) O1, (b) Ov1, (c) O3, and (d) Ov3

Table 1 shows the differences between the calibrated kinematic parameters and the official nominal values of the manipulator for each index used in pose selection. The tabulated values are scaled by 10³; that is, the actual differences are the listed values multiplied by 10⁻³. Note that the unit of the first three components of each vector is radian, and that of the remaining components is meter. A slight but noticeable difference can be seen between the parameters calibrated with the different pose sets. Direct comparison of the six-dimensional se(3) vector representations is not straightforward, but the variation of the calibrated values across the indices was relatively large for p1, p3, and p4.

Table 1

The differences in the kinematic parameters of the calibrated manipulator from the nominal values (×10⁻³)

Δp0: O1 = [7.104, 2.246, 5.583, 5.809, 6.552, 4.267]; Ov1 = [6.807, 1.819, 5.625, 5.544, 6.643, 4.082]; O3 = [7.495, 1.502, 5.749, 5.732, 6.497, 4.192]; Ov3 = [5.977, 1.084, 6.046, 5.657, 6.835, 4.080]
Δp1: O1 = [4.308, 0.231, 3.126, 2.575, 1.062, 2.538]; Ov1 = [3.745, 0.684, 3.454, 2.131, 1.374, 2.530]; O3 = [3.757, 0.473, 3.746, 3.544, 0.138, 2.061]; Ov3 = [3.131, 1.298, 4.246, 1.219, 2.243, 2.882]
Δp2: O1 = [1.376, 0.076, 0.036, 0.633, 0.399, 0.131]; Ov1 = [1.186, 0.155, 0.147, 0.606, 0.337, 0.157]; O3 = [0.195, 0.634, 0.434, 0.924, 0.673, 0.260]; Ov3 = [1.349, 0.213, 0.542, 0.838, 0.247, 0.106]
Δp3: O1 = [1.321, 0.042, 0.282, 0.361, 2.335, 2.501]; Ov1 = [0.457, 0.151, 0.205, 0.690, 2.392, 2.591]; O3 = [0.366, 0.027, 0.106, 1.370, 2.400, 2.731]; Ov3 = [1.291, 0.388, 0.205, 0.119, 2.037, 2.172]
Δp4: O1 = [4.494, 0.059, 0.417, 0.971, 2.476, 2.360]; Ov1 = [3.001, 0.173, 0.600, 0.378, 2.589, 2.394]; O3 = [2.023, 0.355, 0.368, 0.037, 2.523, 2.608]; Ov3 = [1.529, 0.511, 0.675, 0.646, 2.275, 1.934]
Δp5: O1 = [0.565, 2.865, 2.445, 0.835, 3.766, 3.883]; Ov1 = [0.287, 3.311, 2.616, 1.171, 3.671, 3.866]; O3 = [0.092, 3.979, 3.246, 0.166, 3.852, 3.767]; Ov3 = [0.637, 3.565, 2.478, 1.480, 3.553, 3.894]
Δp6: O1 = [2.835, 1.639, 4.195, 1.908, 1.515, 6.008]; Ov1 = [3.268, 1.300, 4.690, 2.029, 1.346, 5.920]; O3 = [2.304, 2.224, 5.669, 1.756, 1.707, 5.985]; Ov3 = [3.218, 1.313, 4.790, 1.972, 1.388, 5.849]

Figure 6 shows the resultant deviation from the true circle for each direction (CCW and CW) and height (low and high), as measured during the ball-bar test to evaluate the calibration results. The trajectories in the figure have been enlarged for visibility because the errors were relatively small: less than 1 mm, whereas the radius of the circular trajectory was 100 mm. Figures 6(a) and 6(b) show the errors for the CCW and CW directions, respectively, with the ball-bar placed in the lower position, i.e., at the same height as the base of the manipulator. Figures 6(c) and 6(d) show the corresponding errors with the ball-bar placed in the higher position, i.e., at the same height as the tool frame when the camera captured the markers during the calibration process.

The difference in trajectory error between the rotational directions, CCW and CW, appears to come from the backlash of the manipulator's actuators. Another aspect of the results is that the accuracy of the trajectories was lower in the higher position, as depicted in the figures. This lower accuracy seems to originate from the difficulty of the trajectory: it is closer to a singular joint configuration of the manipulator and requires more significant motion of q2, q3, and q5. The error produced an elliptical shape in the graph, longer along the y-axis and shorter along the x-axis. This shape indicates that the calibration error affected the alignment of the joints rotating around the y-axis, and q2, q3, and q5 had to move more to draw a circle in the higher position, resulting in a larger error in this direction; the resultant motion was less along the y-axis and more along the x-axis. The marker point errors in the image plane cause larger translational errors in the x-y plane than in the z-axis direction or in rotation.

These errors appear to arise from the marker point detection uncertainty, whose effect the visual-biased indices suppress. The ball-bar test shows the positioning errors only in a limited region; however, the errors discussed earlier may occur in other regions as well, and they will likely grow with an increased range of motion.

Fig. 6
Resultant deviation from the true circle as measured during the ball-bar test (enlarged to show only the tip region): (a) deviations in the CCW direction for the lower position, (b) in the CW direction for the lower position, (c) in the CCW direction for the higher position, and (d) in the CW direction for higher position

Table 2 shows the means of the absolute errors and the standard deviations for these trajectories with respect to the true circle. In the ball-bar test in the low position, the differences in the accuracies were slight. However, the kinematic parameters obtained from the poses selected based on Ov1 generated the most accurate trajectories among the four indices. Ov3 exhibited high accuracy in the CW direction, but it performed worse than the other three indices in the CCW direction. The difference in accuracy between the observability indices with and without visual bias was more evident in the ball-bar test in the higher position than in the lower position. This might be because the kinematic parameters were calibrated for the estimated poses at the higher position to reduce the error in this region. Although the difference in Ov3 was smaller than that in Ov1, both visual-biased observability indices, Ov1 and Ov3, exhibited higher accuracy than O1 and O3. Notably, Ov1 reduced the error to approximately half of that caused by O1. These experiments showed that the visual-biased observability indices, primarily Ov1, greatly improved the accuracy of kinematic calibration while suppressing the uncertainty caused by pose estimation using a camera.

Table 2

Mean and standard deviation of errors (mm) from the ball-bar radius of 100.0 mm in the ball-bar test

Height  Direction  O1               Ov1              O3               Ov3
Low     CCW        0.0538 ± 0.0321  0.0419 ± 0.0266  0.0584 ± 0.0343  0.0584 ± 0.0369
Low     CW         0.0501 ± 0.0320  0.0308 ± 0.0232  0.0399 ± 0.0232  0.0334 ± 0.0265
High    CCW        0.1064 ± 0.0739  0.0631 ± 0.0422  0.1049 ± 0.0666  0.0896 ± 0.0522
High    CW         0.1119 ± 0.0765  0.0687 ± 0.0441  0.1138 ± 0.0739  0.0911 ± 0.0522

5 Conclusion

This study proposed new observability indices that account for the bias introduced by pose estimation in the pose selection problem for camera-based kinematic calibration. Existing observability indices can be easily extended to visual-biased indices based on the identification Jacobian obtained by combining the Jacobian that maps the points in a camera image to the camera pose in the PnP problem with the Jacobian that maps the end-effector pose of the manipulator to its kinematic parameters. Of the five existing indices, our method extended O1 as Ov1, exploiting the scale-free property of O1 to suppress the variance of the kinematic parameters after calibration. Furthermore, because O3 is the best index for reducing end-effector positioning uncertainty, we also extended it to Ov3 for consistency and used it to evaluate the new index Ov1. In the experiment, the DETMAX algorithm chose the best pose sets maximizing O1, O3, Ov1, and Ov3, and the ball-bar test compared the resultant end-effector positioning accuracy under the kinematic parameters calibrated from the marker images captured in these pose sets. The ball-bar tests at different heights demonstrated that the kinematic parameters calibrated from the poses selected by Ov1 realized the highest positioning accuracy among the four tested indices, even in regions away from the poses where the camera captured the markers. The positioning accuracy obtained with the poses selected by Ov3 was lower than that with Ov1, and Ov3 exceeded the non-biased indices O1 and O3 only in the region where the camera captured the markers.
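The chaining of the two Jacobians described above can be expressed compactly: the visual-biased index is the ordinary O1 formula evaluated on the stacked per-pose products of the PnP Jacobian and the kinematic identification Jacobian. A minimal sketch with illustrative shapes (48 image-point residual rows per pose, 6 pose coordinates, 42 kinematic parameters; the random matrices are stand-ins for real Jacobians):

```python
import numpy as np

def ov1_index(pnp_jacobians, kin_jacobians):
    """Visual-biased O1 (Ov1): O1 applied to the identification Jacobian
    formed by chaining, per pose, the PnP Jacobian (image-point residuals
    -> camera pose) with the kinematic Jacobian (pose -> parameters)."""
    J = np.vstack([Jp @ Jk for Jp, Jk in zip(pnp_jacobians, kin_jacobians)])
    s = np.linalg.svd(J, compute_uv=False)
    return s.prod() ** (1.0 / s.size) / np.sqrt(len(pnp_jacobians))

# Illustrative: 20 poses, 24 marker corners (48 image rows), 42 parameters.
rng = np.random.default_rng(0)
Jp = [rng.normal(size=(48, 6)) for _ in range(20)]
Jk = [rng.normal(size=(6, 42)) for _ in range(20)]
value = ov1_index(Jp, Jk)
```

Because the only change from O1 is which stacked Jacobian the singular values are taken from, the same extension applies mechanically to the other observability indices, as done for Ov3.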

Sun and Hollerbach [43] concluded that O1 was the best observability index for reducing the variance of the calibrated parameters because of its scale invariance. The findings of this study confirmed that the same index was the best choice when extended with visual bias. O3 is the best observability index for reducing the positioning uncertainty of an end-effector, yet the accuracy resulting from pose selection with Ov1 exceeded that with O3 in the experimented region. This suggests that the effect of uncertainty in the positions of the image points was dominant in camera-based calibration. From the statistical context provided by Sun and Hollerbach [43], we can state that Ov3 is the best index for reducing the positioning uncertainty in the image plane; in end-effector positioning, however, it did not show superiority over Ov1. It may still be advantageous as a pose selection index for increasing the positioning accuracy in visual-based robot control, which would require tests other than the ball-bar test. In conclusion, although the other observability indices, including the traditional O1 and O3, realized reliable calibration in the experiment, Ov1 was preferable for stable calibration when the kinematic parameters are calibrated from camera images of captured marker points.

This work applied the new index only to the standard pose selection method, selecting 20 poses in a predetermined spherical region to maximize the index using the DETMAX algorithm, in order to test the effect of the visual-biased index. However, the observability index is a universal criterion for the kinematic parameter calibration of a manipulator and is also valid for pose selection in other calibration approaches. For example, Boby [24] adopted an alternative geometrical approach for camera-based robot calibration using the geometric constraint of each joint's rotation and selected the calibration region based on the condition number of the Jacobian matrix computed for each end-effector pose in that region. Ov1 can be naturally introduced into this region selection process by computing it along the end-effector trajectory as each joint moves. Region selection using Ov1 remains possible even if the relationship between the marker points and the camera is reversed, as when a marker is placed on the robot's end-effector, as proposed by Balanji et al. [23]. Furthermore, we focused only on offline calibration; however, it is possible to calibrate the parameters online by periodically capturing markers placed around the robot, as in the online calibration technique using an IMU and a position sensor [48]. In these cases, the region in which the camera captures the marker can be selected based on Ov1 to reduce the uncertainty of marker point identification by image processing. These theoretical considerations need to be confirmed by further implementation and experiments, which are worth pursuing in our future work.

Furthermore, an actual calibration application requires deciding the number of poses needed to achieve sufficient calibration accuracy, accounting for the time cost of taking images, and restricting the calibration space within the limitations of the manipulator workspace to increase calibration utility. These factors trade off against the calibration performance evaluated by the observability indices, and multi-objective optimization to balance them is an option in a practical calibration process. We evaluated the robot performance by calibrating in a simplified workspace with a predetermined number of poses; therefore, validating the index Ov1 under multi-objective optimization will be taken up in future work.

Finally, this study focused only on kinematic parameters, such as joint alignment and link length, for routine robot calibration and for realizing high accuracy in quasi-static positioning. In the results, the differences in positioning accuracy were less evident in the lower position, a region different from that in which the camera captured the markers. In the previous section, we discussed the reasons in the context of the resultant joint alignment errors. However, the effect of non-kinematic factors becomes more dominant when the height of the end-effector changes; in this case, the kinematic parameters calibrated in the experiment may be optimized for motion in the higher region. The process for calibrating non-kinematic parameters is basically the same as that for kinematic parameters. For example, modeling that considers link mass and gravitational force enables joint compliance calibration [49]. Modeling non-kinematic parameters, such as compliance and the effect of temperature on robot geometry, provides the relationship between these parameters and the measurement uncertainty, so the measurement poses can be evaluated with a visual-biased observability index, as in kinematic calibration. Accurate non-kinematic parameters lead to highly accurate robot motion by accounting for dynamics or dynamic temperature changes. Applying visual-biased observability indices to non-kinematic parameter calibration and verifying their effectiveness through experiments over a larger area remain future work.

Acknowledgment

The authors would like to thank Enago for the English language review.

Conflict of Interest

There are no conflicts of interest.

Data Availability Statement

The authors attest that all data for this study are included in the paper.

References

1. Tutsoy, O., and Barkana, D. E., 2021, "Model Free Adaptive Control of the Under-Actuated Robot Manipulator With the Chaotic Dynamics," ISA Trans., 118, pp. 106–115.
2. Liu, Y., Xu, H., Liu, D., and Wang, L., 2022, "A Digital Twin-Based Sim-to-Real Transfer for Deep Reinforcement Learning-Enabled Industrial Robot Grasping," Rob. Comput. Integr. Manuf., 78, p. 102365.
3. Mooring, B. W., Roth, Z. S., and Driels, M. R., 1991, Fundamentals of Manipulator Calibration, Wiley-Interscience, Hoboken, NJ.
4. Bernard, R., and Albright, S., 1993, Robot Calibration, Springer Science & Business Media, Berlin, Germany.
5. Zhuang, H., and Roth, Z. S., 1996, Camera-Aided Robot Calibration, CRC Press, Boca Raton, FL.
6. Elatta, A., Gen, L. P., Zhi, F. L., Daoyuan, Y., and Fei, L., 2004, "An Overview of Robot Calibration," Inf. Technol. J., 3(1), pp. 74–78.
7. Chen, I.-M., Yang, G., Tan, C. T., and Yeo, S. H., 2001, "Local POE Model for Robot Kinematic Calibration," Mech. Mach. Theory, 36(11–12), pp. 1215–1239.
8. Lightcap, C., Hamner, S., Schmitz, T., and Banks, S., 2008, "Improved Positioning Accuracy of the PA10-6CE Robot With Geometric and Flexibility Calibration," IEEE Trans. Rob., 24(2), pp. 452–456.
9. Nubiola, A., Slamani, M., Joubair, A., and Bonev, I. A., 2014, "Comparison of Two Calibration Methods for a Small Industrial Robot Based on an Optical CMM and a Laser Tracker," Robotica, 32(3), pp. 447–466.
10. Bennett, D. J., and Hollerbach, J. M., 1990, Closed-Loop Kinematic Calibration of the Utah-MIT Hand, pp. 539–552.
11. Driels, M., 1993, "Using Passive End-Point Motion Constraints to Calibrate Robot Manipulators," ASME J. Dyn. Sys. Meas. Contr., 115(3), pp. 560–566.
12. Nubiola, A., Slamani, M., and Bonev, I. A., 2013, "A New Method for Measuring a Large Set of Poses With a Single Telescoping Ballbar," Precis. Eng., 37(2), pp. 451–460.
13. Slamani, M., Joubair, A., and Bonev, I. A., 2015, "A Comparative Evaluation of Three Industrial Robots Using Three Reference Measuring Techniques," Ind. Rob., 42(6), pp. 572–585.
14. Newman, W. S., Birkhimer, C. E., Horning, R. J., and Wilkey, A. T., 2000, "Calibration of a Motoman P8 Robot Based on Laser Tracking," Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, Apr. 24–28, Vol. 4, IEEE, pp. 3597–3602.
15. Chen, G., Wang, H., and Lin, Z., 2014, "Determination of the Identifiable Parameters in Robot Calibration Based on the POE Formula," IEEE Trans. Rob., 30(5), pp. 1066–1077.
16. Boby, R. A., and Saha, S. K., 2016, "Single Image Based Camera Calibration and Pose Estimation of the End-Effector of a Robot," Proceedings of the IEEE International Conference on Robotics and Automation, Stockholm, Sweden, May 16–21, IEEE, pp. 2435–2440.
17. Filion, A., Joubair, A., Tahan, A. S., and Bonev, I. A., 2018, "Robot Calibration Using a Portable Photogrammetry System," Rob. Comput. Integr. Manuf., 49, pp. 77–87.
18. Luo, G., Zou, L., Wang, Z., Lv, C., Ou, J., and Huang, Y., 2021, "A Novel Kinematic Parameters Calibration Method for Industrial Robot Based on Levenberg-Marquardt and Differential Evolution Hybrid Algorithm," Rob. Comput. Integr. Manuf., 71, p. 102165.
19. Motta, J. M. S. T., de Carvalho, G. C., and McMaster, R. S., 2001, "Robot Calibration Using a 3D Vision-Based Measurement System With a Single Camera," Rob. Comput. Integr. Manuf., 17(6), pp. 487–497.
20. Meng, Y., and Zhuang, H., 2007, "Autonomous Robot Calibration Using Vision Technology," Rob. Comput. Integr. Manuf., 23(4), pp. 436–446.
21. Du, G., and Zhang, P., 2013, "Online Robot Calibration Based on Vision Measurement," Rob. Comput. Integr. Manuf., 29(6), pp. 484–492.
22. Hayat, A. A., Boby, R. A., and Saha, S. K., 2019, "A Geometric Approach for Kinematic Identification of an Industrial Robot Using a Monocular Camera," Rob. Comput. Integr. Manuf., 57, pp. 329–346.
23. Balanji, H. M., Turgut, A. E., and Tunc, L. T., 2022, "A Novel Vision-Based Calibration Framework for Industrial Robotic Manipulators," Rob. Comput. Integr. Manuf., 73, p. 102248.
24. Boby, R. A., 2021, "Kinematic Identification of Industrial Robot Using End-Effector Mounted Monocular Camera Bypassing Measurement of 3-D Pose," IEEE/ASME Trans. Mechatron., 27(1), pp. 383–394.
25. Driels, M. R., and Pathre, U. S., 1990, "Significance of Observation Strategy on the Design of Robot Calibration Experiments," J. Rob. Syst., 7(2), pp. 197–223.
26. Borm, J.-H., and Meng, C.-H., 1991, "Determination of Optimal Measurement Configurations for Robot Calibration Based on Observability Measure," Int. J. Rob. Res., 10(1), pp. 51–63.
27. Khalil, W., Gautier, M., and Enguehard, C., 1991, "Identifiable Parameters and Optimum Configurations for Robots Calibration," Robotica, 9(1), pp. 63–70.
28. Nahvi, A., and Hollerbach, J. M., 1996, "The Noise Amplification Index for Optimal Pose Selection in Robot Calibration," Proceedings of the IEEE International Conference on Robotics and Automation, Minneapolis, MN, Apr. 22–28, Vol. 1, IEEE, pp. 647–654.
29. Chiu, Y.-J., and Perng, M.-H., 2004, "Self-Calibration of a General Hexapod Manipulator With Enhanced Precision in 5-DOF Motions," Mech. Mach. Theory, 39(1), pp. 1–23.
30. Takeda, Y., Shen, G., and Funabashi, H., 2004, "A DBB-Based Kinematic Calibration Method for In-Parallel Actuated Mechanisms Using a Fourier Series," ASME J. Mech. Des., 126(5), pp. 856–865.
31. Daney, D., Papegay, Y., and Madeline, B., 2005, "Choosing Measurement Poses for Robot Calibration With the Local Convergence Method and Tabu Search," Int. J. Rob. Res., 24(6), pp. 501–518.
32. Renaud, P., Andreff, N., Gogu, G., and Dhome, M., 2003, "Optimal Pose Selection for Vision-Based Kinematic Calibration of Parallel Mechanisms," Proceedings of the International Conference on Intelligent Robots and Systems, Las Vegas, NV, Oct. 27–31, Vol. 3, IEEE, pp. 2223–2228.
33. Hayati, S. A., 1983, "Robot Arm Geometric Link Parameter Estimation," Proceedings of the 22nd IEEE Conference on Decision and Control, San Antonio, TX, Dec. 14–16, IEEE, pp. 1477–1483.
34. Stone, H., and Sanderson, A., 1987, "A Prototype Arm Signature Identification System," Proceedings of the IEEE International Conference on Robotics and Automation, Raleigh, NC, Mar. 31–Apr. 3, Vol. 4, IEEE, pp. 175–182.
35. Mooring, B. W., 1984, "An Improved Method for Identifying the Kinematic Parameters in a Six Axis Robots," Proceedings of the International Computers in Engineering Conference and Exhibit, Brighton, UK, Vol. 1, pp. 79–84.
36. Zhuang, H., Roth, Z. S., and Hamano, F., 1990, "A Complete and Parametrically Continuous Kinematic Model for Robot Manipulators," Proceedings of the IEEE International Conference on Robotics and Automation, Cincinnati, OH, May 13–18, IEEE, pp. 92–97.
37. Murray, R. M., Li, Z., and Sastry, S. S., 1994, A Mathematical Introduction to Robotic Manipulation, CRC Press, Boca Raton, FL.
38. Selig, J. M., 2005, Geometric Fundamentals of Robotics, Vol. 128, Springer, New York.
39. Xiong, G., Ding, Y., Zhu, L., and Su, C.-Y., 2017, "A Product-of-Exponential-Based Robot Calibration Method With Optimal Measurement Configurations," Int. J. Adv. Rob. Syst., 14(6), p. 1729881417743555.
40. Marchand, E., Uchiyama, H., and Spindler, F., 2015, "Pose Estimation for Augmented Reality: A Hands-On Survey," IEEE Trans. Vis. Comput. Graph., 22(12), pp. 2633–2651.
41. Marchand, É., and Chaumette, F., 2002, "Virtual Visual Servoing: A Framework for Real-Time Augmented Reality," Comput. Graph. Forum, 21, pp. 289–297.
42. Chaumette, F., and Hutchinson, S., 2006, "Visual Servo Control. I. Basic Approaches," IEEE Rob. Autom. Mag., 13(4), pp. 82–90.
43. Sun, Y., and Hollerbach, J. M., 2008, "Observability Index Selection for Robot Calibration," Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA, May 19–23, IEEE, pp. 831–836.
44. Nahvi, A., Hollerbach, J. M., and Hayward, V., 1994, "Calibration of a Parallel Robot Using Multiple Kinematic Closed Loops," Proceedings of the IEEE International Conference on Robotics and Automation, San Diego, CA, May 8–13, IEEE, pp. 407–412.
45. Garrido-Jurado, S., Muñoz-Salinas, R., Madrid-Cuevas, F., and Marín-Jiménez, M., 2014, "Automatic Generation and Detection of Highly Reliable Fiducial Markers Under Occlusion," Pattern Recogn., 47(6), pp. 2280–2292.
46. Mitchell, T. J., 2000, "An Algorithm for the Construction of 'D-Optimal' Experimental Designs," Technometrics, 42(1), pp. 48–54.
47. Endres, S. C., Sandrock, C., and Focke, W. W., 2018, "A Simplicial Homology Algorithm for Lipschitz Optimisation," J. Glob. Optim., 72(2), pp. 181–217.
48. Du, G., Zhang, P., and Li, D., 2015, "Online Robot Calibration Based on Hybrid Sensors Using Kalman Filters," Rob. Comput. Integr. Manuf., 31, pp. 91–100.
49. Tao, P., Yang, G., Sun, Y., Tomizuka, M., and Lai, C. Y., 2012, "Product-of-Exponential (POE) Model for Kinematic Calibration of Robots With Joint Compliance," Proceedings of the IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Kaohsiung, Taiwan, July 11–14, IEEE, pp. 496–501.