Journal on Multimodal User Interfaces

Papers
(The TQCC (top-quartile citation count) of the Journal on Multimodal User Interfaces is 7. The table below lists the papers above that threshold, based on CrossRef citation counts (max. 250 papers), restricted to publications from the past four years, i.e., 2020-03-01 to 2024-03-01.)
Citations  Article
47  “Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design
35  The effects of spatial auditory and visual cues on mixed reality remote collaboration
34  The combination of visual communication cues in mixed reality remote collaboration
27  Multimodal interfaces and communication cues for remote collaboration
23  A survey of challenges and methods for Quality of Experience assessment of interactive VR applications
22  Psychophysical comparison of the auditory and tactile perception: a survey
22  Effects of personality traits on user trust in human–machine collaborations
22  A BCI video game using neurofeedback improves the attention of children with autism
21  Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration
16  Sharing gaze rays for visual target identification tasks in collaborative augmented reality
15  A gaze-based interactive system to explore artwork imagery
14  Words of encouragement: how praise delivered by a social robot changes children’s mindset for learning
13  MUMBAI: multi-person, multimodal board game affect and interaction analysis dataset
12  Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification
11  Internet-based tailored virtual human health intervention to promote colorectal cancer screening: design guidelines from two user studies
11  Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes
10  Comparing mind perception in strategic exchanges: human-agent negotiation, dictator and ultimatum games
10  Developing a scenario-based video game generation framework for computer and virtual reality environments: a comparative usability study
 9  Verbal empathy and explanation to encourage behaviour change intention
 9  fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces
 9  Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks
 8  Interactive sonification strategies for the motion and emotion of dance performances
 8  An audiovisual interface-based drumming system for multimodal human–robot interaction
 8  Training public speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback
 8  Multimodal analysis of personality traits on videos of self-presentation and induced behavior
 7  SoundSight: a mobile sensory substitution device that sonifies colour, distance, and temperature
 7  Combining haptics and inertial motion capture to enhance remote control of a dual-arm robot
 7  Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications
 7  Speech and web-based technology to enhance education for pupils with visual impairment
 7  Grounding behaviours with conversational interfaces: effects of embodiment and failures
 7  Circus in Motion: a multimodal exergame supporting vestibular therapy for children with autism
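For readers who want to reproduce a threshold of this kind, the sketch below estimates a top-quartile citation count from a list of per-paper citation counts. The quartile convention (the floor(n/4)-th most-cited paper) and the sample numbers are illustrative assumptions; the exact aggregation behind the figure of 7 above is not documented on this page.

# Minimal sketch: estimating a TQCC-style (top-quartile citation count)
# threshold from per-paper citation counts, e.g. as retrieved from CrossRef.
# The quartile convention used here is an assumption, not this page's method.

def tqcc(citations: list[int]) -> int:
    """Return the citation count at the top-quartile boundary,
    i.e. the count of the floor(n/4)-th most-cited paper."""
    if not citations:
        raise ValueError("no citation counts given")
    ranked = sorted(citations, reverse=True)
    # 0-based index of the paper sitting at the 25% mark from the top;
    # clamped so very short lists fall back to the single top paper.
    cutoff = max(0, len(ranked) // 4 - 1)
    return ranked[cutoff]

if __name__ == "__main__":
    # Hypothetical sample: the counts listed above plus a few papers
    # below the threshold, which the table omits.
    sample = [47, 35, 34, 27, 23, 22, 22, 22, 21, 16, 15, 14, 13, 12,
              11, 11, 10, 10, 9, 9, 9, 8, 8, 8, 8, 7, 7, 7, 7, 7, 7,
              5, 3, 1]
    print(tqcc(sample))  # prints 22 for this sample of 34 papers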