IEEE Robotics and Automation Letters

Papers
(The H4-Index of IEEE Robotics and Automation Letters is 58. The table below lists the papers above that threshold, based on CrossRef citation counts [max. 250 papers], covering papers published in the past four years, i.e., from 2020-11-01 to 2024-11-01.)
Article | Citations
FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter | 369
EGO-Planner: An ESDF-Free Gradient-Based Local Planner for Quadrotors | 183
DynaSLAM II: Tightly-Coupled Multi-Object Tracking and SLAM | 165
Multi-Sensor Guided Hand Gesture Recognition for a Teleoperated Robot Using a Recurrent Neural Network | 155
Towards High-Performance Solid-State-LiDAR-Inertial Odometry and Mapping | 154
Data-Driven MPC for Quadrotors | 145
DSEC: A Stereo Event Camera Dataset for Driving Scenarios | 141
Pixel-Level Extrinsic Self Calibration of High Resolution LiDAR and Camera in Targetless Environments | 140
BADGR: An Autonomous Self-Supervised Learning-Based Navigation System | 139
R²LIVE: A Robust, Real-Time, LiDAR-Inertial-Visual Tightly-Coupled State Estimator and Mapping | 137
FUEL: Fast UAV Exploration Using Incremental Frontier Structure and Hierarchical Planning | 121
Improving Multi-Agent Trajectory Prediction Using Traffic States on Interactive Driving Scenarios | 120
Moving Object Segmentation in 3D LiDAR Data: A Learning-Based Approach Exploiting Sequential Data | 119
ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building | 117
BALM: Bundle Adjustment for Lidar Mapping | 114
A Unified MPC Framework for Whole-Body Dynamic Locomotion and Manipulation | 113
Faster-LIO: Lightweight Tightly Coupled Lidar-Inertial Odometry Using Parallel Sparse Incremental Voxels | 108
KISS-ICP: In Defense of Point-to-Point ICP – Simple, Accurate, and Robust Registration If Done the Right Way | 105
M2DGR: A Multi-Sensor and Multi-Scenario SLAM Dataset for Ground Robots | 99
BiTraP: Bi-Directional Pedestrian Trajectory Prediction With Multi-Modal Goal Estimation | 97
Multi-Class Road User Detection With 3+1D Radar in the View-of-Delft Dataset | 96
Ground-Aware Monocular 3D Object Detection for Autonomous Driving | 95
Real-Time Semantic Segmentation With Fast Attention | 94
V2X-Sim: Multi-Agent Collaborative Perception Dataset and Benchmark for Autonomous Driving | 93
Unified Multi-Modal Landmark Tracking for Tightly Coupled Lidar-Visual-Inertial Odometry | 91
Combining Events and Frames Using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction | 89
Autonomy in Physical Human-Robot Interaction: A Brief Survey | 89
LOCUS: A Multi-Sensor Lidar-Centric Solution for High-Precision Odometry and 3D Mapping in Real-Time | 88
Recovery RL: Safe Reinforcement Learning With Learned Recovery Zones | 88
Stepwise Goal-Driven Networks for Trajectory Prediction | 85
Integrated Task Assignment and Path Planning for Capacitated Multi-Agent Pickup and Delivery | 84
Vision-Only Robot Navigation in a Neural Radiance World | 83
Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking | 81
Real-Time Gait Phase Estimation for Robotic Hip Exoskeleton Control During Multimodal Locomotion | 80
Direct LiDAR Odometry: Fast Localization With Dense Point Clouds | 80
Patchwork: Concentric Zone-Based Region-Wise Ground Segmentation With Ground Likelihood Estimation Using a 3D LiDAR Sensor | 79
Range-Focused Fusion of Camera-IMU-UWB for Accurate and Drift-Reduced Localization | 78
Elastica: A Compliant Mechanics Environment for Soft Robotic Control | 77
OverlapTransformer: An Efficient and Yaw-Angle-Invariant Transformer Network for LiDAR-Based Place Recognition | 76
Autonomous UAV Exploration of Dynamic Environments Via Incremental Sampling and Probabilistic Roadmap | 76
Message-Aware Graph Attention Networks for Large-Scale Multi-Robot Path Planning | 76
LaneAF: Robust Multi-Lane Detection With Affinity Fields | 73
Object-Independent Human-to-Robot Handovers Using Real Time Robotic Vision | 72
UniFuse: Unidirectional Fusion for 360° Panorama Depth Estimation | 69
Decentralized Multi-Agent Pursuit Using Deep Reinforcement Learning | 68
Path Planning With Automatic Seam Extraction Over Point Cloud Models for Robotic Arc Welding | 65
PRIMAL₂: Pathfinding Via Reinforcement and Imitation Multi-Agent Learning - Lifelong | 65
Air-to-Air Visual Detection of Micro-UAVs: An Experimental Evaluation of Deep Learning | 63
Performance, Precision, and Payloads: Adaptive Nonlinear MPC for Quadrotors | 63
DM-VIO: Delayed Marginalization Visual-Inertial Odometry | 62
Endo-Depth-and-Motion: Reconstruction and Tracking in Endoscopic Videos Using Depth Networks and Photometric Constraints | 62
Concurrent Training of a Control Policy and a State Estimator for Dynamic and Robust Legged Locomotion | 61
Liquid-Metal Magnetic Soft Robot With Reprogrammable Magnetization and Stiffness | 61
DiSCO: Differentiable Scan Context With Orientation | 60
LAMP 2.0: A Robust Multi-Robot SLAM System for Operation in Challenging Large-Scale Underground Environments | 60
Intensity-SLAM: Intensity Assisted Localization and Mapping for Large Scale Environment | 60
Deep Compression for Dense Point Cloud Maps | 59
Are We Ready for Unmanned Surface Vehicles in Inland Waterways? The USVInland Multisensor Dataset and Benchmark | 59
Run Your Visual-Inertial Odometry on NVIDIA Jetson: Benchmark Tests on a Micro Aerial Vehicle | 58
SeqNet: Learning Descriptors for Sequence-Based Hierarchical Place Recognition | 58
Reinforcement Learned Distributed Multi-Robot Navigation With Reciprocal Velocity Obstacle Shaped Rewards | 58
PVStereo: Pyramid Voting Module for End-to-End Self-Supervised Stereo Matching | 58
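For readers unfamiliar with the metric, the H4-Index is the h-index computed over a journal's papers from the last four years: the largest h such that h of those papers each have at least h citations. The sketch below is illustrative and not part of the source page; the function name and the truncated sample of citation counts (taken from the top rows of the table above) are assumptions used only to show how such a threshold can be derived from a list of per-paper counts.

```python
# Minimal sketch (assumed, not from the source): computing an h-index-style
# threshold from per-paper citation counts for a fixed publication window.

def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    counts = sorted(citations, reverse=True)  # rank papers by citations, descending
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # the rank-th paper still has at least `rank` citations
            h = rank
        else:
            break
    return h

# Truncated sample from the table above (first ten rows only, for illustration):
sample = [369, 183, 165, 155, 154, 145, 141, 140, 139, 137]
print(h_index(sample))  # -> 10 for this truncated sample, not the journal's full H4-Index
```

Applied to the full set of papers published in the 2020-11-01 to 2024-11-01 window, this procedure yields the threshold of 58 stated above, which is why the table is cut off at 58 citations.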