Immersive Video Technologies

1st Edition - October 3, 2022

  • Editors: Giuseppe Valenzise, Martin Alain, Emin Zerman, Cagri Ozcinar
  • Paperback ISBN: 9780323917551


Description

Get a broad overview of the different modalities of immersive video technologies—from omnidirectional video to light fields and volumetric video—from a multimedia processing perspective. From capture to representation, coding, and display, video technologies have been evolving significantly, and in many different directions, over the last few decades, with the ultimate goal of providing a truly immersive experience to users.

After setting up a common background for these technologies, based on the theoretical concept of the plenoptic function, Immersive Video Technologies offers a comprehensive overview of the leading technologies enabling visual immersion, including omnidirectional (360-degree) video, light fields, and volumetric video. Following the critical components of the typical content production and delivery pipeline, the book presents acquisition, representation, coding, rendering, and quality assessment approaches for each immersive video modality. The text also reviews current standardization efforts and explores new research directions.

With this book, the reader will:

    • gain a broad understanding of immersive video technologies across three modalities: omnidirectional video, light fields, and volumetric video;
    • learn about the most recent scientific results in the field, including recent learning-based methodologies; and
    • understand the challenges and perspectives for immersive video technologies.

Key Features

    • Describes the whole content processing chain for the main immersive video modalities (omnidirectional video, light fields, and volumetric video)
    • Offers a common theoretical background for immersive video technologies based on the concept of plenoptic function
    • Presents some exemplary applications of immersive video technologies

Readership

Engineering and computer science researchers, graduate students researching and learning image and video processing, R&D engineers in industry and managers making technological decisions

Table of Contents

  • PART 1 Foundations

    1. Introduction to immersive video technologies

    Martin Alain, Emin Zerman, Cagri Ozcinar, and Giuseppe Valenzise

    PART 2 Omnidirectional video

    2. Acquisition, representation, and rendering of omnidirectional videos

    Thomas Maugey

    3. Streaming and user behavior in omnidirectional videos

    Silvia Rossi, Alan Guedes, and Laura Toni

    4. Subjective and objective quality assessment for omnidirectional video

    Simone Croci, Ashutosh Singla, Stephan Fremerey, Alexander Raake, and Aljosa Smolic

    5. Omnidirectional video saliency

    Fang-Yi Chao, Federica Battisti, Pierre Lebreton, and Alexander Raake

    PART 3 Light fields

    6. Acquisition of light field images & videos

    Thorsten Herfet, Kelvin Chelli, and Mikael Le Pendu

    7. Light field representation

    Thorsten Herfet, Kelvin Chelli, and Mikael Le Pendu

    8. Compression of light fields

    Milan Stepanov, Giuseppe Valenzise, and Frédéric Dufaux

    9. Light field processing for media applications

    Joachim Keinert, Laura Fink, Florian Goldmann, Muhammad Shahzeb Khan Gul, Tobias Jaschke, Nico Prappacher, Matthias Ziegler, Michel Bätz, and Siegfried Fößel

    10. Quality evaluation of light fields

    Ali Ak and Patrick Le Callet

    PART 4 Volumetric video

    11. Volumetric video – acquisition, interaction, streaming and rendering

    Peter Eisert, Oliver Schreer, Ingo Feldmann, Cornelius Hellge, and Anna Hilsmann

    12. MPEG immersive video

    Patrick Garus, Marta Milovanovic, Joël Jung, and Marco Cagnazzo

    13. Point cloud compression

    Giuseppe Valenzise, Maurice Quach, Dong Tian, Jiahao Pang, and Frédéric Dufaux

    14. Coding of dynamic 3D meshes

    Jean-Eudes Marvie, Maja Krivokuca, and Danillo Graziosi

    15. Volumetric video streaming

    Irene Viola and Pablo Cesar

    16. Processing of volumetric video

    Siheng Chen and Jin Zeng

    17. Computational 3D displays

    Jingyu Liu, Fangcheng Zhong, Claire Mantel, Soren Forchhammer, and Rafał K. Mantiuk

    18. Subjective and objective quality assessment for volumetric video

    Evangelos Alexiou, Yana Nehmé, Emin Zerman, Irene Viola, Guillaume Lavoué, Ali Ak, Aljosa Smolic, Patrick Le Callet, and Pablo Cesar

    PART 5 Applications

    19. MR in video guided liver surgery

    Rafael Palomar, Rahul Prasanna Kumar, Congcong Wang, Egidijus Pelanis, and Faouzi Alaya Cheikh

    20. Immersive media productions involving light fields and virtual production LED walls

    Volker Helzle

    21. Volumetric video as a novel medium for creative storytelling

    Gareth W. Young, Néill O’Dwyer, and Aljosa Smolic

    22. Social virtual reality (VR) applications and user experiences

    Jie Li and Pablo Cesar

Product details

  • No. of pages: 630
  • Language: English
  • Copyright: © Academic Press 2022
  • Published: October 3, 2022
  • Imprint: Academic Press
  • Paperback ISBN: 9780323917551

About the Editors

Giuseppe Valenzise

Giuseppe Valenzise is a CNRS researcher (chargé de recherches) at the Université Paris-Saclay, CNRS, CentraleSupélec, Laboratoire des Signaux et Systèmes (L2S, UMR 8506), in the Telecom and Networking hub. Giuseppe completed a master's degree and a Ph.D. in Information Technology at the Politecnico di Milano, Italy, in 2007 and 2011, respectively. In 2012, he joined the French Centre National de la Recherche Scientifique (CNRS) as a permanent researcher, first at the Laboratoire Traitement et Communication de l’Information (LTCI), Telecom Paristech, and from 2016 at L2S. He received the French « Habilitation à diriger des recherches » (HDR) from Université Paris-Sud in 2019. His research interests span different fields of image and video processing, including traditional and learning-based image and video compression, light field and point cloud coding, image/video quality assessment, high dynamic range imaging, and applications of machine learning to image and video analysis. He is co-author of more than 70 research publications and of several award-winning papers. He is the recipient of the EURASIP Early Career Award 2018. Giuseppe serves/has served as Associate Editor for IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Image Processing, and Elsevier Signal Processing: Image Communication. He is an elected member of the MMSP and IVMSP technical committees of the IEEE Signal Processing Society for the term 2018-2023, as well as a member of the Technical Area Committee on Visual Information Processing of EURASIP.

Affiliations and Expertise

Researcher, CentraleSupelec, Laboratoire des Signaux et Systemes, Universite Paris-Saclay, CNRS, France

Martin Alain

Dr. Martin Alain received the Master's degree in electrical engineering from the Bordeaux Graduate School of Engineering (ENSEIRB-MATMECA), Bordeaux, France, in 2012 and the PhD degree in signal processing and telecommunications from University of Rennes 1, Rennes, France, in 2016. As a PhD student working at Technicolor and INRIA in Rennes, France, he explored novel image and video compression algorithms. Since September 2016, he has been a postdoctoral researcher in the V-SENSE project at the School of Computer Science and Statistics in Trinity College Dublin, Ireland. His research interests lie at the intersection of signal and image processing, computer vision, and computer graphics. His current topic involves light field imaging, with a focus on denoising, super-resolution, compression, scene reconstruction, and rendering. Martin is a reviewer for the Irish Machine Vision and Image Processing conference, IEEE International Conference on Image Processing, IEEE Transactions on Image Processing, IEEE International Conference on Multimedia & Expo, IEEE International Workshop on Multimedia Signal Processing, Elsevier Signal Processing: Image Communication, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Transactions on Circuits and Systems I, and IEEE Transactions on Multimedia. He is co-organizer of the special sessions on Recent Advances in Immersive Imaging Technology held at EUSIPCO 2018 in Rome, ICIP 2019 in Taipei, ICME 2020 in London, and MMSP 2020 in Tampere. He co-organized the tutorial “Immersive Imaging Technologies: from Capture to Display” at ICME 2020 and ACM Multimedia 2020.

Affiliations and Expertise

Postdoctoral Researcher, V-SENSE Project, School of Computer Science and Statistics, Trinity College Dublin, Ireland

Emin Zerman

Emin Zerman is an Associate Senior Lecturer at Mid Sweden University. Previously, he worked as a postdoctoral research fellow at Trinity College Dublin, Ireland, in the V-SENSE project on immersive imaging technologies. He received his Ph.D. degree (2018) in Signals and Images from Télécom ParisTech, France, and his M.Sc. degree (2013) and B.Sc. degree (2011) in Electrical and Electronics Engineering from the Middle East Technical University, Turkey. He is a member of IEEE and the IEEE Signal Processing Society. He has been acting as a reviewer for several conferences and peer-reviewed journals, including Signal Processing: Image Communication, IEEE TCSVT, IEEE TIP, IEEE TMM, ACM ToG, ACM JoCCH, ACM Multimedia, QoMEX, EUSIPCO, IEEE ICASSP, IEEE MMSP, IEEE ICME, IEEE ICIP, and NordiCHI. He organized several special sessions on immersive imaging technologies at ICME 2020, MMSP 2020, ICIP 2021, and MMSP 2021. He also organized tutorials at IEEE ICME 2020, ACM Multimedia 2020, and IEEE VCIP 2021. He is interested in human visual perception, user interaction, immersive multimedia, 3D technologies, multimedia quality assessment, and information visualization.

Affiliations and Expertise

Associate Senior Lecturer, STC Research Center, Mid Sweden University, Sweden

Cagri Ozcinar

Dr. Cagri Ozcinar is a researcher at Samsung R&D Institute UK. Before joining Samsung, he was a research fellow within the V-SENSE project at Trinity College Dublin, Ireland. He was a post-doctoral fellow in the Multimedia group at Institut Mines-Telecom, Telecom ParisTech, Paris, France. He received the M.Sc. (Hons.) and Ph.D. degrees in electronic engineering from the University of Surrey. He is an associate editor of Signal, Image and Video Processing (Springer). He has also been serving as a reviewer for many journals and conference proceedings, such as IEEE TIP, IEEE TCSVT, IEEE TMM, CVPR, IEEE ICASSP, IEEE ICME, IEEE ICIP, IEEE QoMEX, IEEE MMSP, EUSIPCO, BMVC, and WACV. He has been involved in organizing workshops, challenges, and special sessions at EUSIPCO, ICIP, ICME, and MMSP. He co-organized the tutorial “Immersive Imaging Technologies: from Capture to Display” at ICME 2020 and ACM Multimedia 2020. He has been involved in R&D projects that have resulted in technologies commercialized on Samsung DTVs. His research interests include deep learning, computer vision, and immersive media.

Affiliations and Expertise

Researcher, Samsung R&D Institute, UK
