Abdennour El Rhalibi

Liverpool John Moores University, UK

Title: Coarticulation and Speech Synchronization in MPEG-4 Based Facial Animation


Abdennour El Rhalibi is Professor of Entertainment Computing and Head of Strategic Projects at Liverpool John Moores University, where he heads the Computer Games Research Lab at the Protect Research Centre. He has over 22 years' experience in research and teaching in computer science, and has worked as lead researcher on three EU projects in France and the UK. His current research involves game technologies and applied artificial intelligence. For six years he has led several Entertainment Computing projects funded by the BBC and UK-based games companies, covering cross-platform development tools for games, 3D web-based game middleware development, state synchronisation in multiplayer online games, peer-to-peer MMOGs and 3D character animation. He has published over 150 papers in these areas. He serves on many journal editorial boards, including ACM Computers in Entertainment and the International Journal of Computer Games Technology, and has served as chair and IPC member at over 100 conferences on computer entertainment, AI and VR. He is a member of many international research committees in AI and Entertainment Computing, including the IEEE MMTC Interest Group on 3D Rendering, Processing and Communications (3DRPC IG), the IEEE Task Force on Computational Intelligence in Video Games and IFIP WG 14.4 on Games and Entertainment Computing.


In this talk, Prof. Abdennour El Rhalibi will present an overview of his research in game technologies at LJMU, including recent projects developed with BBC R&D on game middleware development and facial animation. In particular, he will introduce a novel framework for coarticulation and speech synchronization in MPEG-4 based facial animation. The system, known as Charisma, enables the creation, editing and playback of high-resolution 3D models and MPEG-4 animation streams, and is compatible with well-known related systems such as Greta and Xface. It supports text-to-speech for dynamic speech synchronization, and also enables real-time model simplification using quadric-based surface simplification. The coarticulation approach provides realistic, high-performance lip-sync animation based on Cohen-Massaro's model of coarticulation, adapted to the MPEG-4 facial animation (FA) specification. He will also discuss experiments showing that the coarticulation technique gives overall good results compared to related state-of-the-art techniques.
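To illustrate the idea behind the Cohen-Massaro model mentioned above: each speech segment (viseme) exerts a time-varying "dominance" over the articulators, typically modelled as a negative exponential around the segment's centre, and the animated parameter value at any instant is the dominance-weighted average of the segments' targets. The sketch below is a minimal, hypothetical illustration of that blending applied to a single MPEG-4 FAP channel; the segment values (centres, targets, dominance parameters) are invented for the example and are not taken from the Charisma system.

```python
import math

def dominance(t, center, alpha, theta, c=1.0):
    # Negative-exponential dominance function: strongest at the
    # segment centre, decaying with temporal distance from it.
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blended_fap(t, segments):
    # Dominance-weighted average of the segments' FAP targets
    # at time t (seconds) -- the core of coarticulation blending.
    num = den = 0.0
    for seg in segments:
        d = dominance(t, seg["center"], seg["alpha"], seg["theta"])
        num += d * seg["target"]
        den += d
    return num / den if den else 0.0

# Hypothetical two-viseme sequence for one lip-opening FAP:
# a closed /m/ followed by an open /a/ (targets in FAP units).
segments = [
    {"center": 0.10, "alpha": 1.0, "theta": 20.0, "target": 0.0},
    {"center": 0.25, "alpha": 1.0, "theta": 12.0, "target": 512.0},
]

# Midway between the visemes the lips are partially open:
# neither fully closed nor at the /a/ target.
print(blended_fap(0.175, segments))
```

Because the dominance functions overlap, neighbouring visemes influence each other's realisation, which is what produces smooth, natural lip transitions rather than abrupt jumps between key poses.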
