Peer-Reviewed

ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras

Received: 2 January 2018     Accepted: 17 January 2018     Published: 6 February 2018
Abstract

This paper describes a new approach to visual pose estimation of an uncertain robotic manipulator using ANFIS (Adaptive Neuro-Fuzzy Inference System) and two uncalibrated cameras. The main emphasis of this work is on estimating the positioning accuracy and repeatability of a low-cost robotic arm with unknown parameters under an uncalibrated vision system. The vision system consists of two cameras, installed above the robot and to its lateral side, respectively. These cameras require no calibration and can therefore be installed in any position and orientation, with the sole condition that the robot's end-effector remain visible at all times. A red feature point is fixed to the end of the third link of the arm. In this study, image data captured by the two fixed cameras serve as sensor feedback for position tracking of the uncertain robotic arm. The LabVolt R5150 manipulator in our laboratory is used as a case study. The visual estimation system is trained using ANFIS with the subtractive clustering method in MATLAB, where the robot, the feature point, and the cameras are simulated as physical models. To generate the training data for ANFIS, the manipulator was maneuvered throughout its workspace using forward kinematics, and the image coordinates of the feature point were acquired from the two cameras. Simulation experiments show that the location of the robotic arm can be learned by ANFIS using two uncalibrated cameras, eliminating the computational complexity and calibration requirements of multi-view geometry. Based on the Mean Square Error (MSE), Root Mean Square Error (RMSE), mean error, and standard deviation of the errors, the proposed approach performs well enough to serve as visual feedback for an uncertain robotic manipulator. Furthermore, the proposed approach using ANFIS and an uncalibrated vision system offers better flexibility, ease of use, and computational simplicity than conventional techniques.
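
For readers who want a concrete starting point, the sketch below illustrates in MATLAB the kind of pipeline the abstract describes: a Sugeno fuzzy inference system is generated with subtractive clustering, tuned with ANFIS, and evaluated with the error statistics reported in the paper (MSE, RMSE, mean error, standard deviation). This is a minimal sketch, not the authors' code: the data file and variable names (trainingData.mat, U, XYZ), the cluster radius, and the epoch count are illustrative assumptions, and genfis2/anfis/evalfis signatures vary slightly between Fuzzy Logic Toolbox releases.

    % Assumed simulated data, as described in the abstract:
    %   U   - N-by-4 image features [u1 v1 u2 v2] from the top and side cameras
    %   XYZ - N-by-3 end-effector positions from forward kinematics
    load trainingData.mat U XYZ        % hypothetical file with simulated samples

    radius  = 0.5;                     % subtractive-clustering influence range
    nEpochs = 100;                     % ANFIS training epochs
    fis     = cell(1, 3);              % one single-output network per Cartesian axis

    for k = 1:3                        % ANFIS handles one output at a time
        initFis = genfis2(U, XYZ(:, k), radius);          % FIS via subtractive clustering
        fis{k}  = anfis([U XYZ(:, k)], initFis, nEpochs); % hybrid learning
    end

    % Evaluate on the training (or a held-out) set and report the error metrics.
    est = zeros(size(XYZ));
    for k = 1:3
        est(:, k) = evalfis(U, fis{k});   % use evalfis(fis{k}, U) on newer releases
    end
    err  = XYZ - est;
    mse  = mean(err.^2);
    rmse = sqrt(mse);
    fprintf('RMSE per axis: %.4f %.4f %.4f\n', rmse);
    fprintf('Mean error:    %.4f %.4f %.4f\n', mean(err));
    fprintf('Std of error:  %.4f %.4f %.4f\n', std(err));

Because ANFIS networks have a single output, one network is trained per coordinate; the four pixel coordinates from the two uncalibrated cameras are the only inputs, so no camera model or calibration is needed.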

Published in International Journal of Wireless Communications and Mobile Computing (Volume 6, Issue 1)
DOI 10.11648/j.wcmc.20180601.13
Page(s) 20-30
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2018. Published by Science Publishing Group

Keywords

ANFIS, Forward Kinematics, Two Uncalibrated Cameras, LabVolt R5150 Robot

Cite This Article
  • APA Style

    Aung Myat San, Wut Yi Win, Saint Saint Pyone. (2018). ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras. International Journal of Wireless Communications and Mobile Computing, 6(1), 20-30. https://doi.org/10.11648/j.wcmc.20180601.13


    ACS Style

    Aung Myat San; Wut Yi Win; Saint Saint Pyone. ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras. Int. J. Wirel. Commun. Mobile Comput. 2018, 6(1), 20-30. doi: 10.11648/j.wcmc.20180601.13


    AMA Style

    Aung Myat San, Wut Yi Win, Saint Saint Pyone. ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras. Int J Wirel Commun Mobile Comput. 2018;6(1):20-30. doi: 10.11648/j.wcmc.20180601.13


  • @article{10.11648/j.wcmc.20180601.13,
      author = {Aung Myat San and Wut Yi Win and Saint Saint Pyone},
      title = {ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras},
      journal = {International Journal of Wireless Communications and Mobile Computing},
      volume = {6},
      number = {1},
      pages = {20-30},
      doi = {10.11648/j.wcmc.20180601.13},
      url = {https://doi.org/10.11648/j.wcmc.20180601.13},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.wcmc.20180601.13},
     year = {2018}
    }
    


  • TY  - JOUR
    T1  - ANFIS-Based Visual Pose Estimation of Uncertain Robotic Arm Using Two Uncalibrated Cameras
    AU  - Aung Myat San
    AU  - Wut Yi Win
    AU  - Saint Saint Pyone
    Y1  - 2018/02/06
    PY  - 2018
    N1  - https://doi.org/10.11648/j.wcmc.20180601.13
    DO  - 10.11648/j.wcmc.20180601.13
    T2  - International Journal of Wireless Communications and Mobile Computing
    JF  - International Journal of Wireless Communications and Mobile Computing
    JO  - International Journal of Wireless Communications and Mobile Computing
    SP  - 20
    EP  - 30
    PB  - Science Publishing Group
    SN  - 2330-1015
    UR  - https://doi.org/10.11648/j.wcmc.20180601.13
    VL  - 6
    IS  - 1
    ER  - 


Author Information
  • Aung Myat San, Department of Mechatronic Engineering, Mandalay Technological University, Mandalay, Myanmar

  • Wut Yi Win, Department of Mechatronic Engineering, Mandalay Technological University, Mandalay, Myanmar

  • Saint Saint Pyone, Department of Mechatronic Engineering, Mandalay Technological University, Mandalay, Myanmar
