Abstract
Efficient robot integration can be realized by matching real and virtual robots, and accurate robot models can be generated by kinematic parameter calibration. Selecting the end-effector poses at which positioning errors are measured is critical in kinematic parameter calibration. Ideal pose selection maximizes calibration accuracy for a given measurement uncertainty while optimizing measurement cost and utility. In designing the pose selection process, observability indices are widely accepted criteria for evaluating the calibration performance of a candidate pose set. Observability indices represent the effect of uncertainty in the measured end-effector poses on the calibrated parameters. However, unlike expensive direct measurement using a laser, low-cost camera-based kinematic calibration estimates the end-effector poses from marker points in the captured images. The variance of the detected marker positions biases the end-effector poses and, eventually, the calibrated parameters. Therefore, this study proposes extended observability indices for pose selection that account for this bias to realize accurate calibration with a low-cost camera. The target observability index is O1, a scale-free, reliable index used in kinematic calibration; considering the visual bias, we extend it as Ov1. This study evaluated Ov1 by comparing the positioning accuracies obtained after calibration on poses selected by maximizing Ov1, the original O1, O3 (known as the best criterion for restraining end-effector positioning uncertainty), and Ov3, the corresponding extension of O3. A ball-bar test showed that the poses selected by the index Ov1 yielded higher positioning accuracy than those selected by the other indices.
1 Introduction
Robotic manipulators are being increasingly deployed in the manufacturing industry as their performance, affordability, and safety improve. In addition, the variable demand inherent in agile manufacturing requires manipulators to work flexibly and accurately on various tasks. For example, in flexible manufacturing, a manipulator needs to determine the appropriate picking pose based on minor differences among products. It needs to identify these differences in a simulated environment based on the product model, without the manual teaching required by current automation products. As this example shows, accurate modeling of manipulators to achieve accurate positioning is the key to realizing flexible manufacturing with robotic manipulators. Standard commercial manipulators exhibit high repeatability in positioning. However, their absolute positioning accuracy is lower than their repeatability. This lower absolute positioning accuracy is primarily due to errors in the kinematic model caused by differences in parameters, such as link lengths and joint angle offsets, between individual robots arising from manufacturing tolerances. Furthermore, other internal uncertainties, such as the degradation of a manipulator, and external uncertainties, such as environmental temperature, change these errors over time. To deal with these internal and external uncertainties, model-free approaches that adaptively control the manipulator based on measurements have recently been developed [1,2]. Such approaches can, ideally, quickly accommodate changes in the model, including non-parametric factors. However, because of reliability considerations, the traditional approach of modeling the robot and identifying the model parameters through robot calibration is still widely used, especially in the industrial field.
Robot calibration, comprising kinematic calibration (of parameters such as joint angle offsets and link lengths) and non-kinematic calibration (of parameters such as joint compliance and deformation caused by temperature change), has been the subject of robotics research for decades [3–6]. In particular, the positioning error caused by kinematic errors is more significant than that caused by non-kinematic errors [6], and many methods have been proposed for its calibration. The typical first step in these methods is identifying the error between the target pose and the actual pose of a manipulator's end-effector by measuring it in various manipulator configurations. There are many methods for measuring this error. The first commonly applied approach uses the physical geometrical constraints generated by the closed loop between the end-effector and the reference frame. A coordinate measuring machine [7–9] and a ball-bar [10–13] are the major instruments used for measurement in this approach. This approach enables simple and reliable measurements but requires space to fix the instruments. It also makes it difficult to measure various end-effector poses without frequently relocating the measuring instruments. Another measurement approach is the direct measurement of the end-effector pose. A laser tracker is the most popular and reliable device used in this approach [14,15,9,16–18], as it can measure arbitrary end-effector poses with high precision. However, it is expensive and requires a large installation area, making periodic measurements for daily or monthly maintenance in a manufacturing system difficult.
Kinematic calibration using a monocular camera and a marker placed around the manipulator is an attractive alternative to the latter approach, as it does not require the installation of costly measurement devices [19–21,16,22]. In these studies, the authors realized kinematic calibration using a monocular camera attached to the end-effector to estimate its pose. The camera captures a checkerboard pattern of known dimensions and estimates the camera pose by matching the corner points. Balanji et al. [23] proposed a unique calibration method in which a monocular camera fixed in the environment captures a fiducial marker on a three-dimensional structure attached to the end-effector to estimate its pose. Recently, Boby [24] proposed another potent approach that calibrates the kinematic parameters directly from the errors in the marker points, without end-effector pose estimation. One fundamental problem of camera-based kinematic calibration is its lower measurement precision compared with conventional physical-constraint-based devices and laser trackers, owing to unavoidable factors in image processing such as noise and lens distortion. Moreover, the narrower measurement range resulting from the camera's limited field of view (FOV) is another fundamental limitation compared with calibration using a laser tracker.
Therefore, pose selection before calibration, which ensures that the camera can capture the marker, is vital to the success of the calibration. The poses must be selected by considering the time required for capturing, space limitations, and, especially, their performance. The calibration accuracy, subject to the limitation of the camera accuracy, is the measure of performance of this method. Such robot pose selection has been a major problem not only for camera-based calibration but also for calibration with a laser tracker. Classic calibration studies have proposed several observability indices relating the end-effector pose space to the space of robot kinematic parameters for pose selection [25–28], because suppressing the effect of measurement errors on the calibrated parameters is equivalent to improving the observability of the parameter space from the measurement space. Furthermore, several methods for obtaining measurement poses that maximize the corresponding indices have been proposed [26,29–31]. In camera-based kinematic calibration, Renaud et al. [32] selected measurement poses for the camera-based calibration of a parallel robot by minimizing the observability indices called O2 [25] and O4 [28] based on the pre-measured statistical properties of noise in the camera system. Filion et al. [17] used the observability index called O1 [26] to select robot poses from a dataset of poses measured by their camera-based system and a laser tracker. The method of pre-measuring noise is model-free and versatile, but it requires time-consuming measurements to acquire sufficient noise data for many robot poses. Furthermore, classical observability indices, such as O1, O2, and O4, assume isotropic uncertainty in the end-effector pose. However, the end-effector pose estimation process with a camera and marker points biases the uncertainty because the marker points shift on the image plane according to the uncertainty of the image processing. Therefore, there may be more appropriate measurement poses for camera-based calibration that take this bias into account. Boby [24] determined a robot's trajectory in camera-based calibration by identifying the region where the condition number of an identification Jacobian between the error in image points and the robot kinematics becomes small. His approach can be regarded as accounting for the visual bias through the condition number evaluated at a single pose. However, evaluating all measurement poses together using an observability index may lead to a more robust camera-based calibration.
This study aims to realize accurate, periodic, camera-based kinematic calibration, such as daily kinematic parameter tuning to compensate for degradation or temperature change. For this purpose, this study extends the existing observability indices for camera-based calibration by considering the error propagation from the image space to the space of robot kinematics, using the camera identification Jacobian matrix employed in visual servoing. Furthermore, this study adopts O1 as the target measure among the five measures O1 to O5, based on the report that it is the best criterion for reducing the variance of kinematic parameters while remaining invariant to scaling. The new extended observability index, named Ov1, enables pose selection without the error pre-measurement otherwise required for a camera-based measurement system. We apply the extended observability index Ov1 to the traditional pose selection procedure and compare the calibration results with those of poses obtained based on the original observability index O1 to validate the effectiveness of the extension for pose selection in camera-based calibration. Furthermore, this study also contrasts the results with those obtained using another index, O3, which is effective for reducing the variance of end-effector positioning, to evaluate Ov1.
The remainder of this paper is organized as follows: Sec. 2 describes the camera-based kinematic calibration method assumed in this study before introducing the observability indices. Section 3 proposes extended observability indices that consider the visual bias based on the relationship between changes in the image points and the kinematic parameters. Finally, Sec. 4 presents experiments conducted to verify the effectiveness of the proposed indices, and Sec. 5 concludes the study.
2 Camera-Based Kinematic Calibration
Before introducing extended observability indices for robot pose selection, this section introduces the camera-based kinematic calibration based on the product of exponentials (POE) formula assumed in this study, along with the parameter definitions. Section 2.1 introduces the kinematics required for camera-based kinematic calibration, consisting of the kinematics between the local frames of the robot links, the tool coordinate frame, the camera image frame, and the world coordinates. Section 2.2 introduces camera-based kinematic calibration based on the above kinematics.
2.1 Robot and Camera Kinematics.
Kinematics representation using the Denavit–Hartenberg parameters is still the most popular notation in robot geometry. However, this representation has a singularity problem in the context of parameter identification, unlike in robot kinematics analysis: when adjacent joint axes are close to parallel, the common normal line is not uniquely defined, and the parameter values become undefined. Although many parameterization methods have been proposed to avoid this instability in kinematic calibration [33–36], robot kinematics defined by POE formulas can naturally introduce the derivative of a robot's joint poses for a given robot pose using the relationship between the Lie group and the Lie algebra [37,38]. Because of the smoothness of the parameter update with respect to end-effector positioning errors, many recent robot calibration studies have used POE formulas [7,15,39,23]. We represent the robot kinematics by the local POE formula [15] in the following discussion.
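As a concrete illustration, the following is a minimal sketch of forward kinematics under a local POE formula; it is not the exact formulation of Ref. [15], and the inter-link transforms M, the local joint twists xi, and the helper names are assumptions introduced only for this example.

```python
# Minimal local-POE forward kinematics sketch (illustrative, not the authors' code).
import numpy as np
from scipy.linalg import expm

def hat(xi):
    """Map a 6-vector twist [w, v] to its 4x4 se(3) matrix."""
    w, v = xi[:3], xi[3:]
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    X = np.zeros((4, 4))
    X[:3, :3] = W
    X[:3, 3] = v
    return X

def forward_kinematics(M, xi, q):
    """T = M_1 exp(hat(xi_1) q_1) ... M_n exp(hat(xi_n) q_n) M_tool.

    M  : list of n+1 nominal 4x4 inter-link transforms (last one to the tool frame)
    xi : list of n local joint twists (6-vectors)
    q  : n joint values
    """
    T = np.eye(4)
    for i, qi in enumerate(q):
        T = T @ M[i] @ expm(hat(xi[i]) * qi)
    return T @ M[-1]
```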
2.2 Kinematic Calibration From Camera Images.

Relationship between change in parameters: (a) stacked camera identification Jacobian matrix and (b) robot identification Jacobian matrix
3 Visual-Biased Observability Index
The visual-biased observability indices proposed here are extensions of the standard observability indices that take into account the camera identification Jacobian matrix described in the previous section. The proposed indices quantify the sensitivity of the kinematic parameter estimates to changes in the point positions in the image plane frame.
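For reference, a generic sketch of the camera-side relationship is given below: the standard point-feature interaction matrix from visual servoing, which maps a camera twist to the velocity of an image point. This is a common textbook form, not necessarily the authors' exact camera identification Jacobian; the normalized image coordinates x, y and the point depth Z are assumed to be known.

```python
# Point-feature interaction matrix sketch (standard visual-servoing convention).
import numpy as np

def point_interaction_matrix(x, y, Z):
    """2x6 Jacobian mapping the camera twist [v, w] to the image-point velocity [dx, dy]."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x), y],
        [0.0,      -1.0 / Z, y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def stacked_camera_jacobian(points):
    """Stack the 2x6 blocks of all detected marker points (points: list of (x, y, Z))."""
    return np.vstack([point_interaction_matrix(x, y, Z) for x, y, Z in points])
```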
3.1 Observability Indices.
3.2 Extension of Observability Indices.
The above observability indices assume that the variance in the end-effector poses arises from their direct measurement. In image-based kinematic calibration, the variance in the image point positions affects the end-effector pose estimate and deforms the shape of the end-effector pose variance. A more robust measurement pose selection for camera-based kinematic calibration is therefore possible by using observability indices biased by the variance in the image plane.
Let the non-zero singular values of the matrix obtained by stacking the visual-biased identification Jacobian matrices of the m measurement poses be arranged in descending order. These singular values of the integrated Jacobian matrix define a new observability index, obtained by replacing the singular values used in O1, represented by Eq. (19), with these values. We call it the visual-biased observability index Ov1. It quantifies the effect of the variance in the kinematic parameters on the variance of the points in the image plane. The poses at which the camera measures the marker points, selected to maximize this index, ideally become robust to the uncertainty of marker point detection. Furthermore, we also define Ov3 as the extension of O3 in the same manner as Ov1. Following the statistical analysis of Sun and Hollerbach [43], Ov1 is the best scale-free index for reducing the variance of the kinematic parameters, whereas Ov3 is the best index for reducing the position variance and deviation in the image plane and thus for increasing the end-effector positioning accuracy. Therefore, in the following sections, we consider Ov3 as a reference, and the main comparison is made between O1, O3, and Ov1.
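A minimal numerical sketch of this construction is shown below. It assumes the common definitions O1 = (σ1 σ2 … σL)^(1/L)/√m and O3 = σL (the smallest singular value); Eq. (19) is not reproduced here, and the names J_cam_list and J_robot_list for the per-pose camera and robot identification Jacobians are illustrative.

```python
# Sketch of O1/O3 and their visual-biased counterparts Ov1/Ov3 (illustrative).
import numpy as np

def observability_O1_O3(J_stack, m):
    """Return (O1, O3) from a stacked identification Jacobian for m poses."""
    sigma = np.linalg.svd(J_stack, compute_uv=False)
    sigma = sigma[sigma > 1e-12]                   # keep the non-zero singular values
    L = sigma.size
    O1 = np.prod(sigma ** (1.0 / L)) / np.sqrt(m)  # geometric-mean-type index
    O3 = sigma.min()                               # minimum singular value
    return O1, O3

def visual_biased_indices(J_cam_list, J_robot_list):
    """Ov1, Ov3: the same formulas applied to the stacked visual-biased Jacobian."""
    J_vis = np.vstack([Jc @ Jr for Jc, Jr in zip(J_cam_list, J_robot_list)])
    return observability_O1_O3(J_vis, m=len(J_robot_list))
```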
4 Experiment
In this section, we compare the positioning accuracy of the manipulator after camera-based calibration using poses selected to maximize each observability index, namely, O1, O3, Ov1, and Ov3, to demonstrate that the visual-biased observability indices are more robust in camera-based calibration. The pose selection and calibration follow a standard protocol with up-to-date techniques, as described in the following sections, so that the comparison focuses on the effectiveness of Ov1 relative to the traditional indices O1 and O3.
4.1 Experimental Setup.
In the experiment, the target manipulator for the kinematic calibration was a six-axis industrial manipulator (VS-060, DENSO WAVE Inc., Aichi, Japan). The camera attached to the manipulator was an industrial CMOS camera (acA2440-20gc, Basler AG, Schleswig–Holstein, Germany) with a fixed-focus lens (LM8JC10M, Kowa Optronics Co., Ltd., Aichi, Japan), as depicted in Fig. 3. The number of pixels of the camera is 2448 × 2048, and the focal length of the lens is 8.5 mm.

Manipulator with a camera attached to its end-effector and a calibration board placed in front of it
The marker pattern on the calibration board (CharuCo Target, calib.io ApS, Svendborg, Denmark) used to estimate the camera pose is a ChArUco marker, a combination of ArUco [45] and a checkerboard pattern. The ChArUco marker enables camera pose estimation even if part of the pattern is outside the camera's FOV. The calibration board size is 200 mm × 150 mm, and the marker width is 6 mm. The pattern has 18 rows and 25 columns, and the maximum number of detectable marker points is 408. We placed the calibration board in front of the manipulator so that its center was 0.32 m from the base frame of the manipulator, as depicted in Fig. 3.
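A minimal sketch of ChArUco detection and camera pose estimation with OpenCV's aruco module is given below (legacy contrib-style API; the function names changed in recent OpenCV releases). The dictionary choice and the 8 mm square size are illustrative assumptions, whereas the 25 × 18 layout and the 6 mm marker width follow the board described above.

```python
# ChArUco pose estimation sketch (OpenCV legacy aruco API; parameters illustrative).
import cv2

DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_1000)
BOARD = cv2.aruco.CharucoBoard_create(25, 18, 0.008, 0.006, DICTIONARY)

def estimate_board_pose(image, camera_matrix, dist_coeffs):
    """Return (rvec, tvec) of the board in the camera frame, or None on failure."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, DICTIONARY)
    if ids is None or len(ids) == 0:
        return None
    # Interpolate the checkerboard corners from the detected ArUco markers.
    n, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, BOARD)
    if n is None or n < 4:
        return None
    ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
        ch_corners, ch_ids, BOARD, camera_matrix, dist_coeffs, None, None)
    return (rvec, tvec) if ok else None
```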
We adopted a ball-bar test (QC20 ball-bar, Renishaw plc, England, UK), which is typically used for testing the positioning accuracy of CNC machines, for this evaluation. The ball-bar system measures the error in the radius when the manipulator moves around a center to draw a circle. We placed the center of the ball-bar at the same position as the calibration board center, which is 0.32 m in front of the base frame of the manipulator (Fig. 4(a)), and used a z-axis positioning stage to adjust the height of the ball-bar test (Fig. 4(b)). The radius for the ball-bar test is 0.1 m.

Manipulator with ball-bar system for evaluation placed in (a) lower position and (b) higher position
4.2 Experimental Procedure.
First, we optimized 20 camera poses (m = 20) for capturing calibration board images to maximize each of the observability indices O1, O3, Ov1, and Ov3. The optimization was performed using the DETMAX algorithm [46], which is an extensively used pose selection method for kinematic calibration [31]. The DETMAX algorithm optimizes the pose set by adding or removing a pose to minimize the cost function in each iteration. Typically, in the pose addition step, a pose is selected from a predefined pose dataset to decrease the cost function. Instead of preparing a pose dataset, we obtained this additional pose using a derivative-free optimization method, simplicial homology global optimization [47]. The DETMAX algorithm offers a good balance between computational cost and global optimality in calibration pose selection, but its result still depends on the initial pose set. Therefore, we prepared 100 different initial pose sets, applied the DETMAX algorithm to each of them, and chose the result with the maximum observability index.
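The following is a minimal DETMAX-style sketch of this procedure; index(poses), evaluating the chosen observability index, and best_new_pose(poses), returning the additional pose found by a derivative-free optimizer (e.g., SciPy's shgo), are assumed placeholder callables, not the authors' implementation.

```python
# DETMAX-style exchange sketch (illustrative; callables are placeholders).
def detmax(initial_poses, index, best_new_pose, max_iter=100):
    poses = list(initial_poses)
    for _ in range(max_iter):
        candidate = best_new_pose(poses)          # pose addition step
        poses.append(candidate)
        # Removal step: drop the pose whose deletion keeps the index highest.
        scores = [index(poses[:i] + poses[i + 1:]) for i in range(len(poses))]
        i_drop = max(range(len(poses)), key=scores.__getitem__)
        dropped = poses.pop(i_drop)
        if dropped is candidate:                  # no net improvement: converged
            break
    return poses
```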
Next, the robot camera captured calibration board images at the optimized poses for each observability index. The manipulator with the joint alignments obtained after calibration has no analytical inverse kinematics solution, whereas the ideal joint alignments of most commercial robots do. Therefore, we solved the inverse kinematics numerically after obtaining a rough solution from the analytical solution of the VS-060. The optimization process produced 20 poses for each observability index, and the camera thus obtained 80 images in total. The camera was calibrated from all 80 images using the OpenCV library's ChArUco-based camera calibration. After the camera calibration, the iterative algorithm for solving the PnP problem in Eq. (12) estimated the camera pose for each image. Then, the iterative kinematic calibration in Eq. (17) estimated the actual kinematic parameters of the manipulator from each set of estimated camera poses obtained by the above pose selection process for each observability index. As a result, four calibrated kinematic parameter sets were obtained, corresponding to the camera poses selected according to O1, Ov1, O3, and Ov3.
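A minimal Gauss-Newton sketch of the iterative parameter update implied by Eq. (17) is given below; identification_jacobian and pose_error are assumed helper functions (not the authors' code) returning the robot identification Jacobian and the six-dimensional pose error at one measured configuration.

```python
# Iterative kinematic parameter update sketch (Gauss-Newton with a pseudo-inverse).
import numpy as np

def calibrate(params, joint_configs, measured_poses,
              identification_jacobian, pose_error, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        J = np.vstack([identification_jacobian(params, q) for q in joint_configs])
        e = np.hstack([pose_error(params, q, T)
                       for q, T in zip(joint_configs, measured_poses)])
        delta = np.linalg.pinv(J) @ e            # least-squares parameter correction
        params = params + delta
        if np.linalg.norm(delta) < tol:
            break
    return params
```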
Finally, the ball-bar test evaluated the quality of the kinematic calibration. The manipulator moved its end-effector, attached to the ball-bar end, to trace a circle centered on the ball-bar axis under 16 different conditions in total, combining the calibrated kinematic parameter sets, ball-bar heights, and rotation directions. The height of the ball-bar was varied: one height was the same as that of the calibration board during the calibration process, and the other was 0.2 m higher. The circular trajectories were obtained by solving the inverse kinematics with the calibrated kinematic parameters obtained on the basis of O1, Ov1, O3, and Ov3. The trajectories were generated in the counterclockwise (CCW) and clockwise (CW) directions around the ball-bar center. The circles drawn by the manipulator under the eight conditions at each height were compared with the target circles of 0.1 m radius with respect to their centers and radii.
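A minimal sketch of the evaluation statistics reported below (mean absolute radial error and its standard deviation against the nominal 0.1 m radius) is as follows; the variable names are illustrative.

```python
# Ball-bar radial error statistics sketch (illustrative).
import numpy as np

def radial_error_stats(points_xyz, center_xyz, nominal_radius=0.1):
    """Return (mean absolute radial error, standard deviation) in metres."""
    r = np.linalg.norm(np.asarray(points_xyz) - np.asarray(center_xyz), axis=1)
    err = r - nominal_radius
    return float(np.mean(np.abs(err))), float(np.std(err))
```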
4.3 Result.
Figure 5 shows the camera poses selected through pose optimization using DETMAX. These are the best pose sets, with the maximum observability indices, selected from the 100 initial pose sets. The poses selected based on the visual-biased observability indices Ov1 and Ov3 tended to be distributed more toward the boundary of the region than the poses obtained on the basis of the original observability indices O1 and O3. In particular, the pose selection with O1 produced the least distributed and most symmetric poses, with some poses being close to each other.

Best camera pose set selected by optimization for each observability index: (a) O1, (b) Ov1, (c) O3, and (d) Ov3
Table 1 shows the differences of the calibrated kinematic parameters from the official nominal values of the manipulator for each index used in pose selection. The tabulated values are scaled as indicated in the caption (×10⁻³); the unit of the first three parameters of each vector is the radian, and that of the others is the meter. We can see slight but noticeable differences between the parameters obtained by calibrating with the different calibration pose sets. It is not easy to directly compare the six-dimensional se(3) vector representations, but the variation in the calibrated values across the indices was relatively significant for three of the parameters.
The differences in the kinematic parameters of the calibrated manipulator from the nominal values (×10⁻³)
 | O1 | Ov1 | O3 | Ov3
---|---|---|---|---
Figure 6 shows the resultant deviation from the true circle for each direction (CCW and CW) and height (low and high), as measured during the ball-bar test, to evaluate the calibration results. The trajectories in the figure have been enlarged for visibility because the errors were relatively small: the errors were less than 1 mm, whereas the radius of the circular trajectory was 100 mm. Figures 6(a) and 6(b) show the errors obtained from the ball-bar test in the CCW and CW directions, respectively, with the ball-bar placed in the lower position, i.e., at the same height as the base of the manipulator. Figures 6(c) and 6(d) show the errors obtained from the ball-bar test in the CCW and CW directions, respectively, with the ball-bar placed in the higher position, i.e., at the same height as the tool frame when the camera captured the markers during the calibration process. The dependence of the trajectory error on the rotation direction, CCW or CW, seems to come from backlash in the manipulator's actuators. Another aspect of the results was that the accuracy of the trajectories in the ball-bar test was lower in the higher position, as depicted in the figures. This lower accuracy in the higher position seems to originate from the difficulty of the trajectory; it is closer to a singular point of the manipulator's joint configuration and requires more significant motion of q2, q3, and q5. The error resulted in an elliptical shape of the graph, longer along the y-axis and shorter along the x-axis. This shape indicates that the calibration error affected the alignment of the joints that rotate around the y-axis, and q2, q3, and q5 needed to move more to draw the circle in the higher position, resulting in a larger error in this direction; the resultant motion was less along the y-axis and more along the x-axis. The marker point errors in the image plane cause more significant translational errors in the x-y plane than in the z-axis direction or in rotation. These errors seem to arise from the marker point detection uncertainty, and the visual-biased index suppresses this effect. The ball-bar test only shows the positioning errors in a limited region; however, the errors discussed earlier may occur in other regions as well, and the error will increase as the range of motion increases.

Resultant deviation from the true circle as measured during the ball-bar test (enlarged to show only the tip region): (a) deviations in the CCW direction for the lower position, (b) in the CW direction for the lower position, (c) in the CCW direction for the higher position, and (d) in the CW direction for higher position

Table 2 shows the means of the absolute errors and the standard deviations for these trajectories with respect to the true circle. In the ball-bar test in the low position, the differences in the accuracies were slight. However, the kinematic parameters obtained from the poses selected based on Ov1 generated the most accurate trajectories among the four indices. Ov3 exhibited high accuracy in the CW direction, but it performed worse than the other three indices in the CCW direction. The difference in accuracy between the observability indices with and without visual bias was more evident in the ball-bar test in the higher position than in the lower position. This might be because the kinematic parameters were calibrated for the estimated poses at the higher position to reduce the error in this region. Although the difference in Ov3 was smaller than that in Ov1, both visual-biased observability indices, Ov1 and Ov3, exhibited higher accuracy than O1 and O3. Notably, Ov1 reduced the error to approximately half of that caused by O1. These experiments showed that the visual-biased observability indices, primarily Ov1, greatly improved the accuracy of kinematic calibration while suppressing the uncertainty caused by pose estimation using a camera.
Mean and standard deviation of errors (mm) from the ball-bar radius of 100.0 mm in ball-bar test
Height | Direction | O1 | Ov1 | O3 | Ov3
---|---|---|---|---|---
Low | CCW | 0.0538 ± 0.0321 | 0.0419 ± 0.0266 | 0.0584 ± 0.0343 | 0.0584 ± 0.0369
Low | CW | 0.0501 ± 0.0320 | 0.0308 ± 0.0232 | 0.0399 ± 0.0232 | 0.0334 ± 0.0265
High | CCW | 0.1064 ± 0.0739 | 0.0631 ± 0.0422 | 0.1049 ± 0.0666 | 0.0896 ± 0.0522
High | CW | 0.1119 ± 0.0765 | 0.0687 ± 0.0441 | 0.1138 ± 0.0739 | 0.0911 ± 0.0522
5 Conclusion
This study proposed new observability indices that consider the bias caused by the pose estimation problem, for the pose selection problem in camera-based kinematic calibration. Based on the identification Jacobian matrix obtained by combining the Jacobian matrix that maps the points in a camera image to the pose in the PnP problem and the Jacobian matrix that maps the end-effector pose of the manipulator to its kinematic parameters, existing observability indices can easily be extended to visual-biased indices. Here, our method extended O1, among the five existing indices, to Ov1, considering the scale-free property of O1 for suppressing the variance of the kinematic parameters after calibration. Furthermore, to evaluate the new index Ov1, we also adopted O3, the best index for decreasing the end-effector positioning uncertainty, and extended it to Ov3 for consistency. In the experiment, the DETMAX algorithm chose the best pose sets to maximize O1, O3, Ov1, and Ov3, and the ball-bar test compared the resulting end-effector positioning accuracies with the kinematic parameters obtained through calibration from the marker images captured at these pose sets. The ball-bar tests at different heights demonstrated that the kinematic parameters calibrated from the poses selected by Ov1 realized the highest positioning accuracy among the four tested indices, even in regions different from the region where the camera captured the markers. The positioning accuracy with the kinematic parameters obtained from the poses selected using Ov3 was lower than that using Ov1, but Ov3 showed higher accuracy than the non-biased indices O1 and O3 only in the region where the camera captured the markers.
Sun and Hollerbach [43] concluded that O1 was the best observability index for reducing the variance of the calibrated parameters because of its scale invariance. The findings of this study confirmed that the same index was also the best choice when extended with the visual bias. O3 is the best observability index for reducing the positioning uncertainty of an end-effector, but the accuracy resulting from pose selection with Ov1 exceeded the result with O3 in the tested region. This suggests that the effect of uncertainty in the positions of the image points was dominant in camera-based calibration. From the statistical context provided by Sun and Hollerbach [43], we can state that Ov3 is the best index for reducing the positioning uncertainty in the image plane; however, it did not show superiority over Ov1 in end-effector positioning. Nevertheless, it may have an advantage as an index for pose selection to increase the positioning accuracy in vision-based robot control, which requires further tests beyond the ball-bar test. In conclusion, although the other observability indices, including the traditional indices O1 and O3, realized reliable calibration in the experiment, Ov1 was preferable for stable calibration when the kinematic parameters were calibrated from camera images of the captured marker points.
This work applied the new index only to the standard pose selection method, selecting 20 poses in a predetermined spherical region to maximize the index using the DETMAX algorithm, in order to test the effect of the visual-biased index. However, the observability index is a universal index for the kinematic parameter calibration of a manipulator, and it is also valid for the pose selection of other calibration approaches. For example, Boby [24] adopted an alternative geometrical approach for camera-based robot calibration using the geometric constraint of each joint's rotation. He selected the calibration region based on the condition number of the Jacobian matrix by calculating it for each end-effector pose in this region. Ov1 can be naturally introduced into this region selection process by calculating it for the end-effector trajectory as each joint moves. Region selection using Ov1 is possible even if the relationship between the marker points and the camera is reversed, as when a marker placed on the robot's end-effector is observed by a fixed camera, as proposed by Balanji et al. [23]. Furthermore, we focused only on offline calibration; however, it is possible to calibrate the parameters online by periodically capturing markers placed around the robot, as in the online calibration technique with an IMU and a position sensor [48]. In these cases, we can select the region where the camera captures the marker so as to reduce the uncertainty of marker point identification by image processing based on Ov1. These theoretical discussions need to be confirmed by further implementation and experiments, which are worth pursuing in our future work.
Furthermore, a practical calibration application requires deciding the number of poses needed to achieve sufficient calibration accuracy while accounting for the time cost of capturing images, and choosing the calibration space within the limitations of the manipulator workspace, in order to increase the calibration utility. These factors trade off against the calibration performance evaluated by the observability indices. Multi-objective optimization to balance these factors is an option in a practical calibration process. In this study, we evaluated the robot performance by calibrating it in a simplified workspace with a predetermined number of poses. Therefore, validating the index Ov1 within a multi-objective optimization will be taken up in future work.
Finally, this study focused only on kinematic parameters, such as joint alignment and link length, for daily robot calibration and on realizing high accuracy in quasi-static positioning. In the results, the differences in positioning accuracy were less evident in the lower position, a region different from the region where the camera captured the markers. In the previous section, we discussed the reasons in the context of the resulting joint alignment errors. However, the effect of non-kinematic factors becomes more dominant when the height of the end-effector changes, and in this case the kinematic parameters calibrated in the experiment may have been optimized for motion in the higher region. The process for calibrating non-kinematic parameters is basically the same as that for kinematic parameters. For example, modeling that considers the link masses and gravitational forces enables joint compliance calibration [49]. Modeling non-kinematic parameters, such as compliance and temperature effects on the robot geometry, provides the relationship between these parameters and the measurement uncertainty. As in kinematic calibration, we can then evaluate the measurement poses using a visual-biased observability index. Accurate non-kinematic parameters result in highly accurate robot motion by accounting for the robot's dynamics or dynamic temperature changes. Visual-biased observability indices can thus be applied to non-kinematic parameter calibration, and their effectiveness should be verified through experiments over a larger area.
Acknowledgment
The authors would like to thank Enago for the English language review.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The authors attest that all data for this study are included in the paper.