ROS based Autonomous Indoor Navigation Simulation Using SLAM Algorithm
Abstract—In this paper, we examine the flexibility of a SLAM-based mobile robot in mapping and navigating an indoor environment. The work is based on the Robot Operating System (ROS) framework. The robot model is built with the Gazebo package and simulated in Rviz. Mapping is performed with the GMapping algorithm, which is open source. The aim of the paper is to evaluate the mapping, localization, and navigation of a mobile robot model in an unknown environment.
Keywords—Gazebo; ROS; Rviz; GMapping; laser scan; navigation; SLAM; robot model; packages.
Introduction
In the modern world, the demand for machines is increasing because robots are less likely to make mistakes. Robotics research and applications range from healthcare to artificial intelligence. Many robots have also entered people's daily lives and made them far more convenient, but how do they work? Are they really like humans, and can they truly perceive the outside world? In fact they cannot: unless a robot is given some sensing capability, it cannot understand its surroundings. Different sensors, such as LIDAR, RGB-D cameras, inertial measurement units (IMUs), and sonar, can provide this sensing capability. Using sensors and mapping algorithms, a robot can create a map of its surroundings and localize itself within that map, and it continuously checks the environment for dynamic changes. Our aim is to build an autonomous navigation platform for indoor applications. In this paper, we examine the efficiency of a SLAM (Simultaneous Localization and Mapping) based robot model implemented in ROS (Robot Operating System) by measuring the travel time the robot model takes to reach its destination. The tests are performed in a virtual environment created in Rviz, and travel times are measured while placing different dynamic obstacles along the routes to different destinations.
Motivation
Working with robots requires many sensors, and every process must be handled in real time. To use sensors and actuators that need to be updated every 10-50 milliseconds, we need an operating system that can meet this requirement, and the Robot Operating System (ROS) provides the architecture to achieve it. First, ROS is open source, with a great deal of code from good research institutions that anyone can readily use and integrate into their own projects. Furthermore, robotics engineers previously lacked a common platform for collaboration and communication, which delayed the adoption of robotic butlers and other related developments. Robotic innovation has accelerated over the past decade with the advent of ROS, in which engineers can build robotic applications and programs. Robot navigation is a very broad topic on which most researchers in robotics concentrate. For a mobile robot system to be autonomous, it must analyze data from different sensors and make decisions in order to navigate in an unknown environment. ROS helps us solve different problems related to mobile robot navigation, and these techniques are not restricted to a particular robot but can be reused across different development projects in the field of robotics.
Related Works
In research paper [1], the authors use ROS with the GMapping algorithm for localization and navigation. GMapping builds the map from the laser scan data of a LIDAR sensor, and the map is continuously monitored with OpenCV face detection and Corobot technology to identify people and navigate through the working environment. The authors of research paper [2] describe two cooperative robots that work based on ROS, mapping, and localization. These robots move autonomously and operate in unknown areas; the algorithm used in that project is also SLAM. There, the robots' main task is to pick up three blocks and arrange them in a predetermined manner, and the robots were built for this purpose with the support of the ROS platform. In research paper [3], the authors created a simulation of a manipulator and illustrated methods for implementing robot control in a short time. Using ROS and the Gazebo package, they built a 7-DOF pick-and-place robot model and managed to find a robot controller that takes less time. Research paper [5] compares three SLAM algorithms, Core SLAM, GMapping, and Hector SLAM, in simulation. The best algorithm is used to test an unmanned ground vehicle (UGV) in different terrains for defense missions. Through simulation experiments, they compared the performance of the different algorithms and built a robotic platform that performs localization and mapping. The authors of research paper [6] built a navigation platform using an automated vision and navigation framework; with ROS, the open-source GMapping bundle was used for Simultaneous Localization and Mapping (SLAM). With this setup and Rviz, it was implemented on a TurtleBot 2, and using a Kinect sensor in place of a laser range finder reduced the cost. The journal [9] deals with indoor navigation based on the sensors found in smartphones, where the smartphone serves as both the measurement platform and the user interface. The authors of journal [10] implemented a 6-degree-of-freedom (DOF) pose estimation method and an indoor wayfinding system for the visually impaired. The floor plane is extracted from a 3-D camera's point cloud and added as a landmark node into the graph for 6-DOF SLAM to reduce errors; roll, pitch, yaw, X, Y, and Z are the six axes, and the user interface is through sound. Journal [11] explains why indoor environments are difficult for an autonomous quadcopter. Since the experiments were done indoors, GPS could not be used; instead, a combination of a laser range finder, an XSens IMU, and a laser mirror was used to build a 3-D map and localize within it, and the quadcopter navigates using a SLAM algorithm. In paper [12], the authors describe a fixed-path algorithm and the characteristics of a wheelchair that uses it, with the help of simulation techniques. The authors of paper [13] describe an auto-navigation platform built on Arduino and the use of the I2C protocol to interface components such as a digital compass and a rotary encoder to compute distance. In paper [14], the authors created an autonomous mobile robot using the Fuzzy toolbox in Matlab and used it for path planning; 24 fuzzy rules are applied to the robot. The authors of paper [15] create an object-level map of an indoor space using RFID ultra-high-frequency passive tags and readers, and they state that the method can map a large indoor area in a cost-effective manner.
System
A. ROS
The story of ROS began in the mid-2000s, when Stanford University was building systems to support the Stanford AI Robot and the Personal Robots program. In 2007, Willow Garage, a company based in Menlo Park, California, became involved in the system's development by providing substantial resources, contributing to the further development of a flexible, dynamic software system for robotics; this also brought in additional resources and expertise from the many research efforts involved. The system was developed under the BSD license and gradually attracted more experts, over time becoming a widely used platform in the robotics research community. In 2013, core development and maintenance of ROS were transferred to the Open Source Robotics Foundation, which continues to run it today. ROS is currently used by many thousands of users worldwide, from hobby projects to large-scale industrial automation systems.
The Robot Operating System (ROS) is free and open-source software and one of the most popular middlewares for robotics programming. ROS comes with a message passing interface, tools, package management, hardware abstraction, and more. It provides different libraries, packages, and several integration tools for robot applications. ROS is a message passing interface that provides inter-process communication, so it is commonly referred to as middleware. ROS provides numerous facilities that help researchers develop robot applications. In this research work, ROS is the main base because it publishes messages in the form of topics between different nodes and has a distributed parameter system. ROS also provides inter-platform operability, modularity, and concurrent resource handling. ROS simplifies the whole process of a system by ensuring that threads are not reading from and writing to shared resources directly, but are instead just publishing and subscribing to messages. ROS also lets us create a virtual environment, generate a robot model, implement the algorithms, and visualize everything in a virtual world rather than implementing the whole system directly in hardware. The system can therefore be improved accordingly, giving better results when it is finally implemented in hardware. With this basic understanding of the ROS architecture in place, a comprehensive description of the autonomous navigation features can be presented. The autonomous navigation process in ROS is implemented in the navigation stack, which requires different pieces of information in order to compute a correct route to the desired destination.
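The publish/subscribe decoupling described above can be illustrated without a ROS installation. The following is a minimal sketch in plain Python (not real ROS code; in ROS this would use `rospy.Publisher` and `rospy.Subscriber`): publishers and subscribers share only a topic name, never a direct reference to each other.

```python
from collections import defaultdict

class TopicBus:
    """A tiny stand-in for the ROS topic mechanism (illustrative only)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback for a topic, analogous to rospy.Subscriber
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of this topic
        for callback in self._subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []

# A "mapping node" subscribes to laser scans
bus.subscribe("/scan", lambda msg: received.append(msg))

# A "sensor node" publishes a scan; it knows nothing about the mapping node
bus.publish("/scan", {"ranges": [1.2, 0.9, 2.4]})
print(received[0]["ranges"])  # [1.2, 0.9, 2.4]
```

Because neither node holds a reference to the other, either one can be replaced, restarted, or moved to another machine without changing the other, which is exactly the modularity benefit the paper attributes to ROS.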
B. Gazebo
Gazebo is a robot simulator. It enables a user to create complex environments and provides the opportunity to simulate a robot in the environment that was created. In Gazebo, the user can build a robot model and incorporate sensors in a three-dimensional space. For the environment, the user can create a platform and place obstacles on it. For the robot model, the user can use a URDF file and define the robot's links; through these links, the degree of movement of each part of the robot can be specified. The robot model created for this research is a differential drive robot with two wheels, a laser, and a camera. A sample environment is created in Gazebo for the robot to move through and map accordingly. In this environment, several objects are placed at random in the area being mapped, and these objects are treated as static obstacles.
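Since the model is a differential drive robot, its pose can be propagated from the two wheel speeds. The following is an illustrative sketch (not from the paper) of the standard differential drive odometry update, where `v_l` and `v_r` are the left and right wheel linear velocities and `L` is the wheel separation:

```python
import math

def diff_drive_step(x, y, theta, v_l, v_r, L, dt):
    """Advance a differential drive robot's pose by one time step.

    v_l, v_r: left/right wheel linear velocities (m/s)
    L: distance between the wheels (m)
    dt: time step (s)
    """
    v = (v_r + v_l) / 2.0    # forward velocity of the robot center
    omega = (v_r - v_l) / L  # angular velocity (faster right wheel turns left)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Equal wheel speeds: the robot drives straight along its heading
x, y, theta = diff_drive_step(0.0, 0.0, 0.0, 0.5, 0.5, 0.3, 1.0)
print(x, y, theta)  # 0.5 0.0 0.0
```

In the simulation this integration is done by the Gazebo differential drive plugin, which publishes the resulting odometry for the SLAM and navigation nodes to consume.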
C. SLAM
Autonomous robots should be capable of safely exploring their surroundings without colliding with people or slamming into objects. Simultaneous localization and mapping (SLAM) enables a robot to achieve this by learning what the surroundings look like (mapping) and where it is with respect to those surroundings (localization). SLAM can be implemented with different types of 1D, 2D, and 3D sensors, such as acoustic sensors, laser range sensors, stereo vision sensors, and RGB-D sensors. ROS can be used to implement different SLAM algorithms such as GMapping, Hector SLAM, KartoSLAM, Core SLAM, and Lago SLAM. The Gmapping package in ROS provides tools for creating a two-dimensional map from laser and odometry data. A SLAM algorithm builds a map of an unknown environment while performing localization within that area; once the unknown area has been mapped and the robot knows its position relative to the map, route planning and navigation can be carried out. SLAM is therefore an essential component of autonomous robot navigation. The robot needs to be equipped with a fixed, horizontally mounted laser range finder, and SLAM is also important for avoiding obstacles along the robot's path.
KartoSLAM, Hector SLAM, and Gmapping perform better than the other algorithms in this group. These algorithms have quite similar performance from the point of view of map accuracy, but they are conceptually different: Hector SLAM is EKF based, Gmapping is based on Rao-Blackwellized particle filter (RBPF) occupancy grid mapping, and KartoSLAM is based on graph-based mapping. Gmapping can perform well on a robot with low processing power. The mapping package in ROS provides laser-based SLAM (Simultaneous Localization and Mapping) through a ROS node called slam_gmapping.
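The occupancy grid representation used by Gmapping can be illustrated with a simplified sketch (not the paper's implementation): each cell stores a log-odds occupancy value that is raised when a laser beam endpoint lands in the cell and lowered when a beam passes through it, so repeated observations make the map increasingly confident.

```python
import math

# Log-odds increments for a hit (beam endpoint) and a miss (beam passes through);
# these particular values are illustrative, not from the paper.
L_HIT, L_MISS = 0.85, -0.4

def update_cell(log_odds, hit):
    """Bayesian log-odds update for one grid cell."""
    return log_odds + (L_HIT if hit else L_MISS)

def occupancy_probability(log_odds):
    """Convert log-odds back to an occupancy probability in [0, 1]."""
    return 1.0 - 1.0 / (1.0 + math.exp(log_odds))

# A cell observed as occupied three times becomes confidently occupied
l = 0.0  # log-odds 0 corresponds to the unknown prior p = 0.5
for _ in range(3):
    l = update_cell(l, hit=True)
print(round(occupancy_probability(l), 3))  # 0.928
```

The log-odds form makes each update a single addition, which is one reason grid-based SLAM remains practical on robots with low processing power.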
A SLAM algorithm can be described in the following five important steps:
1. Data acquisition: measurement data are collected from sensors such as cameras or laser scanners.
2. Feature extraction: distinctive, recognizable keypoints and features are selected from the data.
3. Feature association: keypoints and features from previous measurements are associated with the most recent ones.
4. Pose estimation: the robot's new pose is estimated from the relative transformation between the associated keypoints and features and the robot's position.
5. Map adjustment: based on the new pose and the corresponding measurements, the map is updated accordingly.
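The five steps above can be sketched in code. The toy pipeline below is an illustrative sketch, not the paper's algorithm: it tracks 2D point landmarks, associates each new observation with the nearest stored landmark (step 3), estimates a translation-only pose change as the mean landmark shift (step 4), and re-anchors the observations into the map frame (step 5).

```python
def nearest(point, landmarks):
    """Step 3: associate an observation with the closest known landmark."""
    return min(landmarks, key=lambda l: (l[0]-point[0])**2 + (l[1]-point[1])**2)

def slam_step(landmarks, observations):
    """One iteration of a toy SLAM loop over 2D point features.

    Steps 1-2 are assumed done: `observations` are feature points already
    extracted from the latest sensor reading.
    """
    pairs = [(obs, nearest(obs, landmarks)) for obs in observations]
    # Step 4: translation-only pose estimate = mean shift landmark -> observation
    dx = sum(o[0] - l[0] for o, l in pairs) / len(pairs)
    dy = sum(o[1] - l[1] for o, l in pairs) / len(pairs)
    # Step 5: correct the observations by the estimated motion and merge
    corrected = [(o[0] - dx, o[1] - dy) for o in observations]
    return (dx, dy), corrected

# Known map with two landmarks; the robot's motion makes every new
# observation appear shifted by (1, 0) relative to the map.
landmarks = [(0.0, 0.0), (2.0, 2.0)]
pose_delta, corrected = slam_step(landmarks, [(1.0, 0.0), (3.0, 2.0)])
print(pose_delta)  # (1.0, 0.0)
```

Real systems replace each step with something far more robust (scan matching, RANSAC association, particle filters or graph optimization), but the loop structure is the same.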
D. Rviz
Rviz is a visualization tool in which sensor data can be displayed in a 3D environment; for example, if we attach a Kinect to the robot model in Gazebo, the laser scan values can be visualized in Rviz. From the laser scan data we can build a map that can then be used for autonomous navigation. In Rviz we can access and graphically represent these values using camera images, laser scans, and so on, and this information can be used to build point clouds and depth images. In Rviz, coordinate systems are known as frames. Many displays can be selected for viewing in Rviz, each showing data from a different sensor, and any data can be added for display by clicking the Add button. The Grid display provides the ground or reference plane; the LaserScan display shows the output of the laser scanners and is of type sensor_msgs/LaserScan; the PointCloud display shows the positions given by the program; and the Axes display provides the reference point.
Implementation
The environment in which the robot model performs navigation is created in Gazebo, and the robot model that was created is imported into it. The robot model consists of two wheels, two caster wheels for ease of movement, and a camera attached to the robot model. A Hokuyo laser sensor is then added to the robot, and the corresponding plugins are included in the Gazebo files. The Hokuyo laser provides the laser data used to create the map. Using the Gmapping package, and adding the different parameters that are necessary, a map is created in Rviz. Initially, the robot model is driven into every corner of the environment until a full map has been created, using the "teleop_key" package, in which the robot is controlled with the keyboard. The final map generated in Rviz turns out to be very similar to the environment created in Gazebo. For visualization in Rviz, the necessary topics were selected and added. The Hokuyo laser sensor used in this robot model publishes the laser data on the topic "/scan", which is selected as the laser scan topic in Rviz. In a similar way, the "/map" topic is added for creating the map. The generated map is saved using the map_server package available in ROS. Once the map has been generated and saved, the robot is ready for the navigation stack packages to be incorporated.
It is very important to note that a robot cannot navigate without being given the map. The navigation stack packages, together with amcl, provide a probabilistic localization system for a robot moving in a 2D environment. The robot is now ready to navigate anywhere in the created map. The destination can be given using the 2D Nav Goal option in Rviz, which essentially provides the robot with a goal: the user clicks on the desired area of the map and also indicates the orientation the robot should take. The blue line is the path the robot has to follow to reach the destination. Because of certain parameters, the robot may not follow exactly the path given to it, but it always tries to follow it by constantly re-planning its route. The node graph indicates the different topics being published and subscribed to by the different nodes; the /move_base node subscribes to several topics, such as odometry, velocity commands, the map, and the goal, which provide the data the robot base needs to navigate the environment.
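The constant re-planning described above can be illustrated with a toy grid planner (a sketch under simplifying assumptions; the real navigation stack uses layered costmaps with separate global and local planners rather than plain BFS). When a newly detected obstacle is added to the map, the planner simply searches again on the updated grid:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search on a 4-connected occupancy grid.

    grid: list of rows, 0 = free, 1 = obstacle. Returns the list of cells
    from start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = [[0, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
first = shortest_path(grid, (0, 0), (2, 2))

# A dynamic obstacle is detected in the center cell; re-plan around it
grid[1][1] = 1
replanned = shortest_path(grid, (0, 0), (2, 2))
print(len(first), len(replanned))  # 5 5
```

Here the re-planned route is just as short because a detour exists; when no free route remains, the function returns None, mirroring the robot stopping until a path can be found.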
Evaluation of the Results
To evaluate the performance of ROS and SLAM-based Gmapping and navigation, specific environments were created. In each environment, different parameters were examined, such as how well the SLAM-generated map represents reality and the time the robot takes to reach a given destination. In addition, dynamic obstacles were placed in the robot's navigation path to test the time the robot needs to re-plan onto another path. The algorithm was tested with several destinations: when destination A is set as the robot's goal, SLAM finds the shortest path according to the previously generated map, but when a dynamic obstacle is placed in that path, the laser sensor scans the scene and the map is updated with the detected obstacle. Once the map is updated, SLAM finds the next shortest path to the destination.
Conclusion
This project set out to verify the performance of ROS- and SLAM-based mapping and navigation by driving a robot through a specific environment created in the Rviz simulator and through its map. After the map was created and a goal point fixed, the time the robot took to reach the destination was measured, and an average was computed over 10 trials. The same procedure was repeated for different destination points. In some cases obstacles were also introduced, so that the robot would find another path and travel along it. A second environment was created and tested in the same way, and the time needed to reach the destination was computed.
From this study it can be observed that the robot responds well and takes only a reasonable time to cover the distance from the starting point to the destination. As the distance increases, the time taken increases accordingly. When the map contains obstacles, the robot finds the shortest available path, and if additional obstacles are introduced, the robot stops and recomputes a new path.
Referenced Original Text
[1] International Journal of Pure and Applied Mathematics Volume 118 No. 7 2018, 199-205
ROS based Autonomous Indoor Navigation Simulation Using SLAM Algorithm
Rajesh Kannan Megalingam, Chinta Ravi Teja, Sarath Sreekanth, Akhil Raj
Department of Electronics and Communication Engineering, Amrita Vishwa Vidyapeetham, Amritapuri, Kerala, India.
Abstract—In this paper, we are checking the flexibility of a SLAM based mobile robot to map and navigate in an indoor environment. It is based on the Robot Operating System (ROS) framework. The model robot is made using the Gazebo package and simulated in Rviz. The mapping process is done by using the GMapping algorithm, which is an open source algorithm. The aim of the paper is to evaluate the mapping, localization, and navigation of a mobile robotic model in an unknown environment. Keywords—Gazebo; ROS; Rviz; Gmapping; laser scan; Navigation; SLAM; Robot model; Packages.
I. INTRODUCTION
In the modern world, the need for machines is increasing because the probability of the robot making mistakes is low. The research and application of robotics range from healthcare to artificial intelligence. A robot can't understand the surroundings unless it is given some sensing power. We can use different sensors like LIDAR, RGB-D camera, IMU (inertial measurement units) and sonar to give the sensing power. By using sensors and mapping algorithms a robot can create a map of the surroundings and locate itself inside the map. The robot will be continuously checking the environment for the dynamic changes happening there. Our aim was to build an autonomous navigation platform for indoor application. In this paper, we are checking the efficiency of a SLAM (Simultaneous Localization and Mapping) based robot model implemented in ROS (Robot Operating System) by measuring the travel time taken by the robot model to reach the destination. The test is done in a virtual environment created by Rviz. By placing different dynamic obstacles for different destinations in the map, the travel time is measured.
II. MOTIVATION
Working with robots needs a lot of sensors, and every process needs to be handled in real time. To use the sensors and actuators which need to be updated every 10-50 milliseconds we need a type of operating system that gives this kind of privilege. Robot Operating System (ROS) provides us with the architecture to achieve this. ROS is open source and there is a lot of code available from good research institutes which one can readily use and implement in one's own projects. Further, robotics engineers earlier lacked a common platform for collaboration and communication, which delayed the adoption of robotic butlers and other related developments. Robotic innovation has quickly paced up since the last decade with the advent of ROS, wherein engineers can build robotic apps and programs. Robot navigation is a very wide topic on which most researchers in the field of robotics are concentrating. For a mobile robot system to be autonomous, it has to analyze data from different sensors and perform decision making in order to navigate in an unknown environment. ROS helps us in solving different problems related to the navigation of the mobile robot, and the techniques are not restricted to a particular robot but are reusable in different development projects in the field of robotics.
III. RELATED WORKS
In the research paper [1], the authors use ROS with the Gmapping algorithm to localize and navigate. The Gmapping algorithm uses laser scan data from the LIDAR sensor to make the map. The map is continuously monitored by OpenCV face detection and Corobot to identify humans and navigate through the working environment. The authors of research paper [2] explain 2 cooperative robots which work based on ROS, mapping, and localization. These robots are self-driving and work in unknown areas. For this project also the algorithm used is SLAM. Here the main tasks of the robots are to pick up three block pieces and to arrange them in a predetermined manner. With the help of ROS, they made robots for this purpose. In the research paper [3], the authors created a simulation of the manipulator and illustrated the methods to implement robot control in a short time. Using ROS and the Gazebo package, they built a model of a pick and place robot with 7 DOF. They managed to find a robot controller which takes less time. A research paper [5] compares 3 SLAM algorithms, Core SLAM, Gmapping, and Hector SLAM, using simulation. The best algorithm is used to test unmanned ground vehicles (UGV) in different terrains for defense missions. Using simulation experiments they compared the performance of different algorithms and made a robotic platform which performs localization and mapping. The authors of the research paper [6] made a navigation platform with the use of an automated vision and navigation framework. With the use of ROS, the open source GMapping bundle was used for Simultaneous Localization and Mapping (SLAM). Using this setup with Rviz, the TurtleBot 2 is implemented. Using a Kinect sensor in place of a laser range finder, the cost is reduced. The journal [9] deals with indoor navigation based on sensors that are found in smartphones. The smartphone is used as both a measurement platform and user interface.
The authors of the journal [10] implemented a 6-degree of freedom (DOF) pose estimation (PE) method and an indoor wayfinding system for the visually impaired. The floor plane is extracted from the 3-D camera's point cloud and added as a landmark node into the graph for 6-DOF SLAM to reduce errors. Roll, pitch, yaw, X, Y, and Z are the 6 axes. The user interface is through sound. Journal [11] explains why the indoor environment is difficult for an autonomous quadcopter. Since the experiment is done indoors they couldn't use GPS; they used a combination of a laser range finder, XSens IMU, and laser mirror to make a 3-D map and locate itself inside it. The quadcopter navigates using a SLAM algorithm. In paper [12] the authors describe a fixed path algorithm and characteristics of the wheelchair which uses this with the help of simulation techniques. The authors of paper [13] explain an auto navigation platform made with Arduino and the use of an I2C protocol to interface components like a digital compass and a rotation encoder to calculate the distance. In the paper [14], using the Fuzzy toolbox in Matlab the authors created an autonomous mobile robot and use the robot for path planning. 24 fuzzy rules on the robot are carried out. The authors of the paper [15] create an object level mapping of an indoor space using RFID ultra-high frequency passive tags and readers. They say the method is used to map a large indoor area in a cost-effective manner.
IV. SYSTEM
A. ROS Robot Operating System (ROS) is free and open-source software and one of the most popular middlewares for robotics programming. ROS comes with message passing interface, tools, package management, hardware abstraction etc. It provides different libraries, packages and several integration tools for the robot applications. ROS is a message passing interface that provides inter-process communication so it is commonly referred to as middleware. There are numerous facilities provided by ROS which help researchers to develop robot applications. In this research work, ROS is considered as the main base because it publishes messages in the form of topics in between different nodes and has a distributed parameter system. ROS also provides inter-platform operability, modularity, and concurrent resource handling. ROS simplifies the whole process of a system by ensuring that the threads aren't actually trying to read and write to shared resources, but are rather just publishing and subscribing to messages. ROS also helps us to create a virtual environment, generate a robot model, implement the algorithms and visualize it in the virtual world rather than implementing the whole system in the hardware itself. Therefore, the system can be improved accordingly, which provides us a better result when it is finally implemented in the hardware.
B. Gazebo Gazebo is a robot simulator. Gazebo enables a user to create complex environments and gives the opportunity to simulate the robot in the environment created. In Gazebo the user can make the model of the robot and incorporate sensors in a three-dimensional space. In the case of the environment, the user can create a platform and assign obstacles to that. For the model of the robot, the user can use the URDF file and can give links to the robot. By giving the link we can give the degree of movement for each part of the robot. The robot model which is created for this research is a differential drive robot with two wheels, laser, and a camera on it as shown in Fig. 1. A sample environment is created in Gazebo for the robot to move and map accordingly. The sample map is shown in Fig. 2. In this environment, several objects were placed randomly in the area where the map is created, and these objects were considered as static obstacles.
C. SLAM Autonomous robots should be capable of safely exploring their surroundings without colliding with people or slamming into objects. Simultaneous localization and mapping (SLAM) enables the robot to achieve this task by knowing how the surroundings look (mapping) and where it stays with respect to the surroundings (localization). SLAM can be implemented using different types of 1D, 2D and 3D sensors like acoustic sensors, laser range sensors, stereo vision sensors and RGB-D sensors. ROS can be used to implement different SLAM algorithms such as Gmapping, Hector SLAM, KartoSLAM, Core SLAM, Lago SLAM. KartoSLAM, Hector SLAM, and Gmapping are better in the group compared to others. These algorithms have quite similar performance from the map accuracy point of view but are actually conceptually different. That is, Hector SLAM is EKF based, Gmapping is based on RBPF occupancy grid mapping and KartoSLAM is based on graph-based mapping. Gmapping can perform well for a robot with less processing power. The mapping package in ROS provides laser-based SLAM (Simultaneous Localization and Mapping), via the ROS node called slam_gmapping.
D. Rviz Rviz is a visualization tool in which we can visualize the sensor data in the 3D environment; for example, if we fix a Kinect on the robot model in Gazebo, the laser scan values can be visualized in Rviz. From the laser scan data, we can build a map and it can be used for auto navigation. In Rviz we can access and graphically represent the values using camera image, laser scan etc. This information can be used to build the point cloud and depth image. In Rviz, coordinates are known as frames. We can select many displays to be viewed in Rviz; they are data from different sensors. By clicking on the Add button we can give any data to be displayed in Rviz. The Grid display will give the ground or the reference. The LaserScan display will give the display from the laser scanners and will be of the type sensor_msgs/LaserScan. The PointCloud display will display the position that is given by the program. The Axes display will give the reference point.
V. IMPLEMENTATION
The environment for the robot model to perform the navigation is created in Gazebo and the robot model which was created is imported into the environment. The robot model consists of two wheels, two caster wheels for the ease of movement, and a camera attached to the robot model. Later the Hokuyo laser is added to the robot and plugins are incorporated into the Gazebo files. The Hokuyo laser provides laser data which can be used for creating the map. Using the Gmapping packages a map is created in Rviz by adding the different parameters that are necessary. Fig. 3 shows the initial generation of the map when launched. Initially, the robot model is moved to every corner of the environment until a full map is created using the "teleop_key" package, where the robot is controlled using the keyboard. As shown in Fig. 4, the final map generated in Rviz is very similar to the created environment in Gazebo. For visualization in Rviz, necessary topics were selected and added. The Hokuyo laser sensor which is used in this robot model publishes the laser data in the form of the topic "/scan", which is selected as the laser scan topic in Rviz. In a similar way, for creating the map, the "/map" topic is added. The generated map is saved using the map_server package that is available in ROS. Once the map is generated and saved, the robot is now ready for the incorporation of navigation stack packages.
It is very important to note that a robot cannot be navigated without feeding the map to it. Navigation stack packages using amcl were employed, which provide a probabilistic localization system for a robot to move in a 2D environment. Now, the robot is ready to navigate anywhere in the created map. The destination for the robot can be given using the 2D Nav Goal option in Rviz, which basically acknowledges the robot with a goal. The user has to click on the desired area in the map and should also point out the orientation that the robot has to be in. The blue line is the actual path that the robot has to follow to reach the destination. The robot may not follow the exact path that is given to it due to some of the parameters, but it always tries to follow it by rerouting itself constantly. The node graph shown in Fig. 5 indicates the different topics that are being published and subscribed to by the different nodes. The /move_base node is subscribed to several topics like odometry, velocity commands, map, and goal; these topics give the necessary data for the base of the robot to navigate in the environment.
VI. EVALUATION OF THE RESULTS
In order to evaluate the performance of ROS and SLAM based Gmapping and navigation, specific environments were created. In each environment, different parameters were examined, like how well the SLAM generated maps represent reality and the time it took for the robot to reach the given destination. Also, dynamic obstacles were placed in the robot's navigation path to test the time needed for the robot to re-plan its path onto another route.