Journal on Multimodal User Interfaces

Papers
(The median citation count of the Journal on Multimodal User Interfaces is 2. The table below lists papers at or above that threshold, based on CrossRef citation counts [max. 250 papers]. It covers publications from the past four years, i.e., from 2022-05-01 to 2026-05-01.)
Article | Citations
Human or robot? Exploring different avatar appearances to increase perceived security in shared automated vehicles | 52
Gesture-based guidance for navigation in virtual environments | 32
Race and robots: a critical discussion on discrimination and inclusion in robotics | 22
An overview study on the use of Semantics in Immersive Environments | 14
A low duration vibro-tactile representation of Braille characters | 11
TapCAPTCHA: non-visual CAPTCHA on touchscreens for visually impaired people | 10
Vis-Assist: computer vision and haptic feedback-based wearable assistive device for visually impaired | 10
A study on the attention of people with low vision to accessibility guidance signs | 8
Smart obstacle-detecting accessory for white cane improves safety and efficiency of outdoor walking for the visually impaired: pilot study | 8
Prediction of pedestrian crossing behaviour at unsignalized intersections using machine learning algorithms: analysis and comparison | 8
Embodied knowledge as a design probe in a user-centered design study of a sonification system for crochet | 8
Assessment of comparative evaluation techniques for signing agents: a study with deaf adults | 8
HaM3D: generalized XR-based multimodal HRI framework with haptic feedback for industry 4.0 | 7
What is good? Exploring the applicability of a one item measure as a proxy for measuring acceptance in driver-vehicle interaction studies | 7
Studying human modality preferences in a human-drone framework for secondary task selection | 7
Comparing head-mounted and handheld augmented reality for guided assembly | 7
Correction: Comparing head-mounted and handheld augmented reality for guided assembly | 6
Three-dimensional sonification as a surgical guidance tool | 6
Exploring User Interactions with Commercial Machines via Real-world Application Logs in the Lab | 6
In-vehicle nudging for increased Adaptive Cruise Control use: a field study | 5
The cognitive basis for virtual reality rehabilitation of upper-extremity motor function after neurotraumas | 5
Data-driven psychophysical methods to diversify SIAs and address bias | 5
Ipsilateral and contralateral warnings: effects on decision-making and eye movements in near-collision scenarios | 5
From personality to argumentative schemes: an analysis of emotional and fallacies schemes for modeling discrimination in SIAs | 5
Identification of visual stimuli is improved by accompanying auditory stimuli through directing eye movement: an investigation in perceptual-cognitive skills | 5
In-vehicle air gesture design: impacts of display modality and control orientation | 4
A SLAM-based augmented reality app for the assessment of spatial short-term memory using visual and auditory stimuli | 4
SonAir: the design of a sonification of radar data for air traffic control | 4
Correction to: PepperOSC: enabling interactive sonification of a robot’s expressive movement | 4
Exploring visual stimuli as a support for novices’ creative engagement with digital musical interfaces | 4
Multimodal exploration in elementary music classroom | 3
Commanding a drone through body poses, improving the user experience | 3
The effects of haptic, visual and olfactory augmentations on food consumed while wearing an extended reality headset | 3
Pegasos: a framework for the creation of direct mobile coaching feedback systems | 3
Comparing alternative modalities in the context of multimodal human–robot interaction | 3
Designing multi-purpose devices to enhance users’ perception of haptics | 2
Does mixed reality influence joint action? Impact of the mixed reality setup on users’ behavior and spatial interaction | 2
A social robot as your reading companion: exploring the relationships between gaze patterns and knowledge gains | 2
Pointing gestures accelerate collaborative problem-solving on tangible user interfaces | 2
AirWhisper: enhancing virtual reality experience via visual-airflow multimodal feedback | 2
Perceptually congruent sonification of auditory line charts | 2
Model-based sonification based on the impulse pattern formulation | 2
Modelling the “transactive memory system” in multimodal multiparty interactions | 2
Presentation voice descriptor using generative AI as a learning aid for visually impaired learners | 2
MODIFF-8 to better motivate: Live adaptive human-socially interactive agent interaction | 2