Autonomous Mobile Robot: GPS and Compass

Introduction

According to Nourbakhsh and Siegwart, mobile robotics is a recent field that combines technologies from various branches of engineering and science. The essence of mobile robotics is to give the previously rigid parts of machines a dexterity rivaling, and even exceeding, that of human beings through a complex combination of technologies drawn from electrical and electronic engineering, computer engineering, and the cognitive and social sciences.[1] Nourbakhsh and Siegwart go on to note that robots have recently found use in various sectors of industry and are replacing human beings. They give the example of manipulators, or robot arms, which can perform complex and repetitive tasks far more easily because of their speed and precision. Speed and precision are particularly important for industries that manufacture complex and small devices such as laptops and mobile phones. [2]

However, as technological advancements took the stage, it became clear that there was still considerable room for improvement. These robots were controlled from a central position, where someone had to keep constant watch to ensure that they did not overrun their tasks. For instance, a robot programmed to spray-paint would continue spraying even when there were no vehicles to paint unless it was shut down. This limitation was enough to set technologists thinking about an intelligent robot that could be given memory and perform tasks with the same precision and speed, but with minimal human supervision. Furthermore, it was realized that to manipulate a robot's movements it is imperative to first understand how it moves. Human beings do not control robots directly but rather use the robot's own motions to control its movements. Nourbakhsh and Siegwart note that humans perform localization and cognitive activities, but rely on the robot control scheme to control the robot.[3]

Locomotion

A robot is a machine consisting of parts that are immobile on their own. The question that arises, therefore, is how a robot achieves the capacity to move freely. Dudek and Jenkin answer this question by describing a robot as a collection of subsystems with the capacity to move, perceive, reason, and communicate. Movement helps the robot to explore its environment, perception helps it to respond to changes within that environment, and communication provides an interface for the exchange of information between the robot and human beings. [4] The locomotion methods that have been studied include wheeled and legged locomotion, and they are modeled on the motions observed in the surrounding fauna. [5] According to Paul Chandana, the morphology of the robot plays an important role in how easily it navigates its environment and responds to instructions. Morphology influences several factors, such as how the sensory and motor aspects of the robot interrelate, the resulting changes, and the complexity of the control system that will be required. [6] Kim and Shim, in their research, found that an algorithm based on evolutionary programming would solve the problem of driving the robot at a particular velocity while ensuring its stability. The study showed that the proposed algorithm could provide stability for the robot, as evidenced by computer simulations and by analysis based on the Lyapunov theory. [7]

Locomotion also includes the dexterity achieved by the outer parts of the robot, which assist not only in motion but also in how the robot manipulates its appendages to accomplish various tasks. Therefore, as the autonomous robot moves forward, it should also be able to perform tasks with its arm in the same fashion in which it accomplishes its motions. Motion helps a robot to explore its environment, but a similar automated system is needed to enable the robot to perform various tasks. [8] A method proposed by Xian et al for optimizing the motion of an autonomous robot by reducing redundancies is the General Weighted Least Norm control for redundant manipulators. This method greatly reduces the inefficiencies that arise at the joints of the robot, inefficiencies that stem from the inability of the various joints to move in unison and harmony. In their findings, Xian et al showed that the General Weighted Least Norm control, applied to a seven-degree-of-freedom manipulator, improved the overall path followed by the various parts and significantly reduced the limitations presented by the joints. [9] Creating harmony in the way the different joints move ensures that motion is optimized and that little energy is lost in the process. A further observation is that the challenges presented by the motion of a robot's appendages are often not limited to the number of joints and can significantly exceed it; the advantage of the General Weighted Least Norm approach is that it can also reduce these additional challenges. Figure 1 shows an autonomous robot arm avoiding a cylindrical obstacle through the manipulation of joint movements.

Figure 1 (Adapted from Xian et al).
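
To make the redundancy-resolution idea concrete, the following Python sketch computes joint velocities for a simple planar three-joint arm using a weighted least-norm solution. This is an illustrative approximation of the general concept, not the exact formulation used by Xian et al; the link lengths, joint angles, and weighting matrix are assumed values.

```python
import numpy as np

def planar_jacobian(q, link_lengths):
    """Jacobian of a planar 3-link arm (end-effector x, y versus joint angles)."""
    l1, l2, l3 = link_lengths
    q1, q2, q3 = q
    s1, s12, s123 = np.sin(q1), np.sin(q1 + q2), np.sin(q1 + q2 + q3)
    c1, c12, c123 = np.cos(q1), np.cos(q1 + q2), np.cos(q1 + q2 + q3)
    return np.array([
        [-l1 * s1 - l2 * s12 - l3 * s123, -l2 * s12 - l3 * s123, -l3 * s123],
        [ l1 * c1 + l2 * c12 + l3 * c123,  l2 * c12 + l3 * c123,  l3 * c123],
    ])

def weighted_least_norm(J, x_dot, W):
    """Joint velocities minimizing q_dot^T W q_dot subject to J q_dot = x_dot."""
    W_inv = np.linalg.inv(W)
    return W_inv @ J.T @ np.linalg.solve(J @ W_inv @ J.T, x_dot)

q = np.array([0.3, 0.6, -0.4])          # current joint angles (rad), assumed
J = planar_jacobian(q, link_lengths=(0.4, 0.3, 0.2))
x_dot = np.array([0.05, 0.0])           # desired end-effector velocity (m/s)
W = np.diag([1.0, 1.0, 5.0])            # heavier weight discourages motion of joint 3
q_dot = weighted_least_norm(J, x_dot, W)
print(q_dot)
```

Raising the weight on a particular joint discourages its motion, which is one way of biasing the arm away from configurations that would cause problems at that joint while the end-effector still tracks the commanded velocity.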

Mobile Robot Kinematics

Kinematics is concerned with various aspects of velocity as a robot moves, including its angular and linear velocity. Computing the linear and angular velocities as the robot moves helps in determining the most suitable design for a particular environment. [1] According to Fahimi, most commercial mobile robots are based on the Hilare model, in which the linear and angular velocities are computed and resolved to arrive at a general law guiding the production of subsequent robots. [2] Mobile robot kinematics is an important aspect of all mobile robots, since it determines the degree of stability that a particular design can achieve in a specific environment. Stability becomes even more of a prerequisite for autonomous mobile robots because they operate without supervision. Other important factors, such as the center of gravity of a particular model, are also taken into consideration when resolving the angular and linear velocities. [3] The essence is always to arrive at laws, governed by calculations, that are applicable to a particular design. Figure 2, adapted from NASA[4], shows an example of a robot that was used for the exploration of Mars. This autonomous mobile robot was operated from Earth.

Figure 2 (Adapted from NASA).
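
The Hilare model referred to above is a differential-drive platform, so its linear and angular velocities follow directly from the two wheel speeds. The short sketch below (the wheel radius, track width, and speeds are assumed values) shows how those velocities are resolved and then integrated into a pose.

```python
import math

def diff_drive_velocities(omega_left, omega_right, wheel_radius, track_width):
    """Resolve wheel speeds (rad/s) into linear (m/s) and angular (rad/s) body velocities."""
    v = wheel_radius * (omega_right + omega_left) / 2.0
    omega = wheel_radius * (omega_right - omega_left) / track_width
    return v, omega

def integrate_pose(x, y, theta, v, omega, dt):
    """First-order update of the robot pose in the world frame."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: wheels at 4.0 and 5.0 rad/s, 5 cm wheels, 30 cm track, 0.1 s steps.
x, y, theta = 0.0, 0.0, 0.0
for _ in range(50):
    v, omega = diff_drive_velocities(4.0, 5.0, wheel_radius=0.05, track_width=0.30)
    x, y, theta = integrate_pose(x, y, theta, v, omega, dt=0.1)
print(round(x, 3), round(y, 3), round(theta, 3))
```

Running the loop with unequal wheel speeds makes the robot trace an arc, which is exactly the coupling between linear and angular velocity that the kinematic analysis has to account for.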

Perception

Perception is concerned with how robots sense changes in the environment and with the responses that the robot undertakes. For instance, the vision of an autonomous mobile robot is very important and should therefore be accurate and clear so that the robot can respond according to the memory it has been given. Louis and Boyer explain that it is important for the robot to be able to use only the existing form of light, generally white light, to process images instead of requiring additional illumination. An algorithm is usually employed to resolve the distance between a particular object and the robot, and the algorithm employed must be very sensitive to depth variations; this has the effect of improving the accuracy of the image. Blur is one particular challenge that designers of autonomous mobile robots have to deal with, and an algorithm is also used to estimate the extent of the blur. In most instances, special optics technology is employed to resolve an image observed from different planes. According to Louis and Boyer, the essence is to find a point spread function that, convolved with an image of small focal gradient, produces a large focal gradient.[1]

The major challenge in autonomous mobile robotics, in terms of perception, has always been to resolve the robot's trajectory from the point of origin to a particular destination. In autonomous mobile robots, an additional challenge lies in formulating a sensory strategy that will guide the robot in detecting aspects such as light and analyzing their variations. These challenges require special devices that guarantee proper detection and response. An example of a design employed to address these sensory problems is the Amplitude Modulated Continuous Wave (AMCW) approach, which makes use of a single modulation frequency for reception. [2] The perception of a robot is comparable to the senses of a human being: after the robot has been given the memory of a particular object or situation, it will be able to perceive it and respond effectively. In this area, algorithmic methods largely solve the problem of configuring how the robot resolves distance. The implication is that the error in the determination of distance is greatly reduced while vision blur is largely eliminated. This attribute helps the robot to detect and avoid obstacles effectively, as well as to identify a particular target.
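
As a simplified illustration of the AMCW principle mentioned above, the range can be recovered from the phase shift of the received modulation envelope relative to the transmitted one. The figures used below (a 10 MHz modulation frequency and a 1.2 rad phase shift) are assumptions for the example, not values from any particular sensor.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def amcw_range(phase_shift_rad, modulation_freq_hz):
    """Range implied by the measured phase shift of the modulation envelope.

    The signal travels to the target and back, so the round-trip distance is
    (phase / 2*pi) * wavelength, and the one-way range is half of that.
    """
    wavelength = SPEED_OF_LIGHT / modulation_freq_hz
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength / 2.0

def max_unambiguous_range(modulation_freq_hz):
    """Beyond this range the phase wraps around and the measurement becomes ambiguous."""
    return SPEED_OF_LIGHT / (2.0 * modulation_freq_hz)

# Example: a 10 MHz modulation with a measured phase shift of 1.2 rad.
print(amcw_range(1.2, 10e6))            # about 2.86 m
print(max_unambiguous_range(10e6))      # about 15 m
```

The trade-off visible in the second function is a design choice: a higher modulation frequency gives finer resolution but a shorter unambiguous range.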

Localization and Mapping

According to Chatila, localization and mapping are aspects that should be computed simultaneously so that the performance of the autonomous robot can be maximized. The ability of a robot to navigate autonomously is the essence of autonomy: the robot should have the capacity to construct a spatial representation, make decisions concerning motion, plan the motion, and finally initiate it. This challenge is also solved through mathematical laws and the computation of all probabilities. According to Liang et al, tracking an autonomous robot is almost impossible without considering its kinematics and dynamics, because both velocities determine the relative position of the robot. However, the problem with depending on these velocities during localization is that they are subject to interference by noise, which can result in ineffectiveness in terms of performance and stability. The most effective method is therefore to establish a way of resolving the location that is not subject to such interference.
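
The sensitivity to noise described above can be made concrete with a brief dead-reckoning sketch: when the position is obtained purely by integrating measured velocities, measurement noise accumulates into a growing position error over time. The noise level and motion profile below are assumed purely for illustration.

```python
import math
import random

def dead_reckon(v_measured, omega_measured, dt, steps, noise_std=0.02):
    """Integrate noisy velocity measurements; the position error grows with time."""
    x = y = theta = 0.0
    for _ in range(steps):
        v = v_measured + random.gauss(0.0, noise_std)
        w = omega_measured + random.gauss(0.0, noise_std)
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += w * dt
    return x, y, theta

# True motion: a straight line at 0.5 m/s for 60 s. Noisy integration drifts.
x, y, theta = dead_reckon(0.5, 0.0, dt=0.1, steps=600)
print(x, y, theta)    # compare with the ideal result (30.0, 0.0, 0.0)
```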

Liang et al provide an alternative way of determining the location. Their system introduces a method that does not rely on measuring the dynamics and kinematics; the problem is instead solved by introducing a sliding-observer system in conjunction with the Lyapunov analysis method. Lyapunov analysis has so far demonstrated that the tracking system, which is based on the sliding patch concept, performs without adverse repercussions. The primary drawback in this respect is that most autonomous mobile robots have reduced dexterity, and the current approach proposes the use of neural networks. [3] Mancha et al propose, in this respect, a method that can be used to determine the position of a robot using camera-space manipulation with a linear camera model. Experiments using cameras have shown that this method can significantly reduce the degree of error during positioning. [4] The system uses basic camera concepts to manipulate space and thereby optimize the process of establishing the position of the robot. [5] Another method, proposed by Tahri et al, is the decoupling of image-based visual servoing. Figure 3 illustrates the results of this method.

Figure 3.

Figure 3(a) shows a picture before the use of the method, while Figure 3(b) shows the same picture after the method has been applied. The method combines basic 3-D concepts with the use of invariants to control the translational motions, which ensures that a robot can be effectively located. Locating a robot does not negate its autonomy but ensures that, at any one time, its position with regard to the grid resolution can be determined.[1] An example of sliding mode control, adapted from Kikuuwe, is shown in Figure 4.

Figure 4 (Adapted from Kikuuwe).

Planning and Navigation

The basis of planning and navigation for autonomous mobile robots is the avoidance of obstacles and the ability to negotiate various landscapes and environments effectively. A Global Positioning System (GPS) receiver, sonar detectors, and an electronic compass are required for identifying the location of the robot and scanning the landscape in front of it, so that the unmanned vehicle can be controlled and the avoidance operation started if obstacles are found on its path.

As outlined by Olunloyo and Ayomoh, various strategies are available. One is to modulate the integration between the virtual obstacle concept and the virtual goal concept in a method termed the hybrid virtual force field. In their findings, Olunloyo and Ayomoh established that the hybrid virtual force field methodology was versatile and robust.[1] In resolving the challenge of planning the path of unmanned aerial vehicles, Portas et al argue that the evolutionary algorithm method provides the best solution. The calculations are based on the readings resolved from a GPS receiver, and the system transfers these concepts to unmanned aerial vehicles. Generally, the resolved path of such aerial vehicles is arrived at by considering multiple coordinates within the evolutionary algorithm. [2]
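
The virtual force field idea can be sketched in a few lines: the goal contributes an attractive force, each nearby obstacle contributes a repulsive one, and the robot steers along the resultant. The fragment below is a generic illustration of this concept rather than Olunloyo and Ayomoh's specific hybrid formulation; all gains, ranges, and coordinates are assumed.

```python
import math

def virtual_force_heading(robot, goal, obstacles,
                          attract_gain=1.0, repulse_gain=0.5, influence=1.5):
    """Return the heading (rad) of the resultant virtual force acting on the robot."""
    # Attractive component pointing at the goal.
    fx = attract_gain * (goal[0] - robot[0])
    fy = attract_gain * (goal[1] - robot[1])
    # Repulsive components from obstacles inside the influence radius.
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            push = repulse_gain * (1.0 / d - 1.0 / influence) / d**2
            fx += push * dx / d
            fy += push * dy / d
    return math.atan2(fy, fx)

heading = virtual_force_heading(robot=(0.0, 0.0), goal=(5.0, 0.0),
                                obstacles=[(1.0, 0.2)])
print(math.degrees(heading))   # slightly negative: the robot biases away from the obstacle
```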

The coordinates of the robot's current location are correlated with the coordinates of the point of destination and the desired direction in order to fulfill the control function properly. Being among the main components of the construction, the GPS receiver and the embedded compass provide the robot with relevant information about its actual environment, and after this data is processed the decision to turn, to move forward, or to stop can be made. The collision-avoidance operation starts when an obstacle is detected by the sonar sensor in the desired direction. The effectiveness of the operation depends upon the accuracy of the information taken from the GPS and compass; for this reason, the mechanisms for retrieving the information from each of these components and processing it are of crucial importance.
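
A simplified sketch of the decision loop described above is given below: the bearing to the destination is computed from the GPS coordinates, compared with the compass heading, and the result determines whether the robot turns, drives forward, or stops to avoid an obstacle reported by the sonar. The coordinate values, thresholds, and helper names are illustrative assumptions.

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees from north) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def steering_command(gps_fix, waypoint, compass_heading, sonar_range_m,
                     obstacle_threshold_m=0.5, heading_tolerance_deg=10.0):
    """Decide between stopping, turning, or driving straight toward the waypoint."""
    if sonar_range_m < obstacle_threshold_m:
        return "stop_and_avoid"            # obstacle detected in the desired direction
    desired = bearing_to_waypoint(*gps_fix, *waypoint)
    error = (desired - compass_heading + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    if abs(error) <= heading_tolerance_deg:
        return "forward"
    return "turn_right" if error > 0 else "turn_left"

# Example readings (hypothetical): a waypoint roughly to the north-east, robot facing east.
print(steering_command(gps_fix=(51.5000, -0.1200), waypoint=(51.5007, -0.1190),
                       compass_heading=90.0, sonar_range_m=2.0))   # prints "turn_left"
```

The accuracy argument in the paragraph above shows up directly in this loop: errors in the GPS fix shift the computed bearing, and errors in the compass heading shift the comparison, so both degrade the turn/forward decision.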

The essence of the method of Portas et al is that it ensures that tasks such as target identification and path recognition are resolved using an evolutionary algorithm. [3] In developing the path, Jaillet et al propose that sampling-based path planning is an efficient way of generating the space through which the vehicle will navigate. [4] An autonomous mobile robot should have an inherent ability to explore different terrains while avoiding obstacles and dangers; this capability enables the robot to operate with minimal human supervision and constitutes the essence of autonomy. Furthermore, an autonomous robot should be able to move from the launching point, through various obstacles, to arrive at the target, and it should be able to identify the target using a marker that it can recognize. Figure 5 shows a robot that is used to inspect air ducts; the camera situated at the front can detect different gradients, walls, and points of intersection during navigation and respond appropriately. [5]

Figure 5 (Adapted from Sedirep).
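
To give a rough feel for the sampling-based path planning mentioned above, the sketch below grows a small rapidly-exploring random tree (RRT) in a 2-D workspace with a single circular obstacle. It illustrates the general sampling idea rather than the configuration-space cost-map method of Jaillet et al; the workspace bounds, step size, and obstacle are assumed.

```python
import math
import random

def collision_free(p, obstacle_center=(5.0, 5.0), obstacle_radius=1.5):
    """Single circular obstacle used purely for illustration."""
    return math.dist(p, obstacle_center) > obstacle_radius

def rrt(start, goal, bounds=(0.0, 10.0), step=0.5, iterations=2000, goal_tol=0.5):
    """Grow a tree by random sampling until a node lands near the goal."""
    nodes = [start]
    parents = {start: None}
    for _ in range(iterations):
        sample = (random.uniform(*bounds), random.uniform(*bounds))
        nearest = min(nodes, key=lambda n: math.dist(n, sample))
        angle = math.atan2(sample[1] - nearest[1], sample[0] - nearest[0])
        new = (nearest[0] + step * math.cos(angle), nearest[1] + step * math.sin(angle))
        if not collision_free(new):
            continue
        nodes.append(new)
        parents[new] = nearest
        if math.dist(new, goal) < goal_tol:
            path, node = [], new
            while node is not None:          # walk back to the start
                path.append(node)
                node = parents[node]
            return list(reversed(path))
    return None

path = rrt(start=(1.0, 1.0), goal=(9.0, 9.0))
print(len(path) if path else "no path found")
```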

Conclusion

Some decades ago the idea of creating an autonomous mobile robot would have seemed far-fetched. A combination of technologies, such as Lyapunov theory, evolutionary algorithms, 3-D techniques, and camera-based concepts, has made this feat a reality. These technologies have made it easier for robots to navigate terrains while avoiding obstacles and locating targets, to perceive different objects within various environments, to determine their position and relay this information to a control center, and to optimize the use of their appendages. These basic attributes define the very essence of autonomous mobile robots.

Reference List

Adams, D. Sensor Modeling, Design and Data Processing for Autonomous Navigation. World Scientific Publishing Company, New York, 1998, p. 43.

Conrad, H. Application of Evolutionary Algorithm in Autonomous Mobile Robots. Cambridge University Press, Cambridge, 2003, p. 82

Dudek, G. and Jenkin, M. Computational Principles of Mobile Robotics. Cambridge University Press, New York, 2000, p. 15.

Fahimi, F. Autonomous Robots: Modeling, Path Planning and Control, Volume 740. University of Alberta, Edmonton, 2009, p.163.

Goncalves et al. Vector Fields for Robot Navigation Along Time-Varying Curves in n-Dimensions. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Homeler, R. Manipulation of Autonomous Mobile Robots. Massachusetts Institute of Technology, Massachusetts, 2001, p.48.

Jaillet et al. Sampling-Based Path Planning on Configuration-Space Cost Maps. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Kikuuwe, R. Proxy-Based Sliding Mode Control: A Safer Extension of PID Position Control. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Kim, H. and Shim, S. Robust Optimal Locomotion Control Using Evolutionary Programming for Autonomous Mobile Robots. 2009. Web.

Liang et al. Adaptive Task Space Tracking Control of Robots without Task Space and Joint Space Velocity Measurement. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Louis, S. and Boyer, L. Applications of AI Machine, Vision and Robotics. World Scientific Publishing Company, New York, 2005, p. 214.

Mancha et al. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

NASA. Mobile Robot Sojourner. 1997. Web.

Nourbakhsh, I. and Siegwart, R. Autonomous Mobile Robots. Massachusetts Institute of Technology, Massachusetts, 2005.

Nourbakhsh, I. and Siegwart, R. Introduction to Autonomous Mobile Robots. Massachusetts Institute of Technology, Massachusetts, 2004.

Olunloyo, S. and Ayomoh, O. Autonomous Mobile Robot Navigation Using Hybrid Virtual Force Field Concept. 2009. Web.

Portas et al. Evolutionary Trajectory Planner for Multiple UAVs in Realistic Scenarios. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Ralph, M. & Medhat, A. An Integrated System for User Adaptive Robotic Grasping. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Schaal, S. From Animals to Animats: Proceedings of the Eighth International Conference on the Simulation of Adaptive Behavior. Massachusetts Institute of Technology, Massachusetts, 2004, p.33.

Sedirep. A Robot Featuring a Pan-Tilt Camera. Web.

Thomas, L & Andrew, A. The Basic Concepts of Robotics. Massachusetts Institute of Technology, Massachusetts, 2008, p.198.

Xian et al. IEEE Transactions on Robotics Journals 26(4), 2010. Web.

Footnotes

  1. Kikuuwe, R. Proxy-Based Sliding Mode Control: A Safer Extension of PID Position Control. IEEE Transactions on Robotics Journals 26(4), 2010. Web.