Human-Centered Artificial Intelligence

Research and Applications

1st Edition - May 15, 2022

  • Editors: Chang Nam, Jae-Yoon Jung, Sangwon Lee
  • Paperback ISBN: 9780323856485
  • eBook ISBN: 9780323856492

Description

Human-Centered Artificial Intelligence: Research and Applications presents current theories, fundamentals, techniques and diverse applications of human-centered AI. Sections address the question "Are AI models explainable, interpretable and understandable?", introduce readers to the design and development process, including mind perception and human interfaces, and explore various applications of human-centered AI, including human-robot interaction, healthcare, decision-making, and more. As human-centered AI aims to push the boundaries of previously limited AI solutions and bridge the gap between machine and human, this book is an ideal update on the latest advances.

Key Features

  • Presents extensive research on human-centered AI technology
  • Provides different methods and techniques used to investigate human-AI interaction
  • Discusses open questions and challenges in trust within human-centered AI
  • Explores how human-centered AI changes and operates in human-machine interactions

Readership

Graduate students, researchers, academics and professionals in the areas of human factors, robotics, social psychology, neuroscience, computer science, and engineering psychology

Table of Contents

  • Copyright
  • Contributors
  • Foreword
  • Preface
  • Part I. Frameworks of explainable AI
  • Chapter 1. Are AI models explainable, interpretable, and understandable?
  • 1.1. Artificial intelligence: human and thinking machine
  • 1.2. Explainability, interpretability, and understandability of AI
  • 1.3. Why is XAI needed?
  • 1.4. Categorization of XAI
  • Chapter 2. Explanation using model-agnostic methods
  • 2.1. Introduction
  • 2.2. Marginal effect of input feature
  • 2.3. Contribution of each feature
  • 2.4. Surrogate models
  • Appendix
  • Chapter 3. Explanation using examples
  • 3.1. Introduction
  • 3.2. Category of example-based explanations
  • 3.3. Similarity-based methods
  • 3.4. Influence-based methods
  • 3.5. Case studies
  • 3.6. Summary
  • Chapter 4. Explanation of ensemble models
  • 4.1. Introduction
  • 4.2. Ensemble models
  • 4.3. Challenges of explaining ensemble models
  • 4.4. Methods for interpreting ensemble models
  • 4.5. Conclusions
  • Chapter 5. Explanation of deep learning models
  • 5.1. Introduction
  • 5.2. Activation-based models
  • 5.3. Backpropagation-based models
  • Part II. User-centered AI design and development process
  • Chapter 6. AI as an explanation agent and user-centered explanation interfaces for trust in AI-based systems
  • 6.1. Communication with computers: HCI and UX
  • 6.2. Being with friends: new rationality and trust in companion AI
  • 6.3. Explanation for trust: trustworthy AI and explainable AI
  • 6.4. Explanation for results: AI as an explanation agent and explanation interfaces
  • Chapter 7. Anthropomorphism in human-centered AI: Determinants and consequences of applying human knowledge to AI agents
  • 7.1. Introduction
  • 7.2. Anthropomorphism: Using human knowledge for nonhuman targets
  • 7.3. Anthropomorphism in human–AI interaction
  • 7.4. Conclusion
  • Chapter 8. Designing a pragmatic explanation for the XAI system based on the user's context and background knowledge
  • 8.1. Introduction
  • 8.2. Explanation for the XAI system
  • 8.3. Pragmatic explanation of van Fraassen
  • 8.4. Summary and conclusion
  • Chapter 9. Interactive reinforcement learning and error-related potential classification for implicit feedback
  • 9.1. Introduction
  • 9.2. ErrP classification methods for implicit human feedback in RL
  • 9.3. Interactive reinforcement learning
  • 9.4. Discussion
  • 9.5. Conclusion
  • Chapter 10. Reinforcement learning in EEG-based human-robot interaction
  • 10.1. Introduction
  • 10.2. The reinforcement learning problem
  • 10.3. Reinforcement learning in EEG classification
  • 10.4. Reinforcement learning using EEG in robot learning
  • 10.5. Conclusions
  • Part III. Applications in human–AI interaction
  • Chapter 11. Shopping with AI: Consumers' perceived autonomy in the age of AI
  • 11.1. Application of AI in advertising—its influence on consumers
  • 11.2. Prospects and concerns for AI-based advertising
  • 11.3. Challenges in AI-driven ads: a way to garner consumers' trust
  • 11.4. Implications
  • Chapter 12. Use of deep learning techniques in EEG-based BCI applications
  • 12.1. The electroencephalogram and brain–computer interfaces
  • 12.2. Deep learning and EEGNet
  • 12.3. Preparing the environment
  • 12.4. Building and running the model
  • 12.5. Understanding the model
  • 12.6. Conclusions
  • Chapter 13. AI in human behavior analysis
  • 13.1. Introduction
  • 13.2. Human behavior analysis using AI
  • 13.3. Sitting posture analysis using AI algorithms
  • 13.4. Conclusion
  • Chapter 14. AI in nondestructive condition assessment of concrete structures: Detecting internal defects and improving prediction performance using prediction integration and data proliferation techniques
  • 14.1. Introduction
  • 14.2. Machine learning algorithms and applications
  • 14.3. Discussion and conclusions
  • Part IV. Ethics, privacy, and policy in human–AI interaction
  • Chapter 15. Ethics of AI in organizations
  • 15.1. Introduction
  • 15.2. What are the principles of ethical AI?
  • 15.3. Existing organizational theory
  • 15.4. Integrating ethical principles of AI with organizational theory
  • 15.5. Conclusion
  • Chapter 16. Designing XAI from policy perspectives
  • 16.1. Introduction
  • 16.2. Two psychological concerns in AI
  • 16.3. Explainable AI
  • 16.4. Remaining technical and political issues
  • 16.5. Conclusion
  • Chapter 17. Responsible AI and algorithm governance: An institutional perspective
  • 17.1. Introduction
  • 17.2. Fair machine learning
  • 17.3. Explainable machine learning
  • 17.4. Conclusion
  • Author Index
  • Subject Index

Product details

  • No. of pages: 312
  • Language: English
  • Copyright: © Academic Press 2022
  • Published: May 15, 2022
  • Imprint: Academic Press
  • Paperback ISBN: 9780323856485
  • eBook ISBN: 9780323856492

About the Editors

Chang Nam

Chang S. Nam is a Professor of Industrial and Systems Engineering at North Carolina State University (NCSU), USA. He is also associated faculty in the UNC/NCSU Joint Department of Biomedical Engineering and the Department of Psychology. He received a PhD in human factors and ergonomics from the Grado Department of Industrial and Systems Engineering at Virginia Tech. His primary research interests are human-robot interaction, brain-computer interfaces, neuroergonomics, and affective computing. Nam currently serves as Editor for a journal on Brain-Computer Interfaces and has served as a guest editor for special issues of the International Journal of Human-Computer Interaction.

Affiliations and Expertise

Professor, Industrial and Systems Engineering, North Carolina State University (NCSU), USA; Associated faculty, UNC/NCSU Joint Department of Biomedical Engineering, Department of Psychology; Brain Research Imaging Center (BRIC), UNC

Jae-Yoon Jung

Jae-Yoon Jung is a Professor in the Department of Industrial and Management Systems Engineering (IE) at Kyung Hee University (KHU), Korea, and an adjunct professor in the Department of Software Convergence (SWCon) at KHU. He is currently the director of the IE Graduate Program and the Smart Factory Program at KHU, where he leads the Industrial AI Lab. He received his PhD, MS, and BS degrees in Industrial Engineering from Seoul National University (SNU) in 2005, 2001, and 1999, respectively. At SNU, he was supervised by Prof. Suk-Ho Kang and Prof. Yeongho Kim in the Intelligent Manufacturing Systems Lab. He later visited the Process Mining Group at Eindhoven University of Technology (TU/e) in the Netherlands, supervised by Prof. Wil van der Aalst. Before joining KHU, he worked for the u-Computing Innovation Center (uCIC), directed by Prof. Jinwoo Park, and studied in the Information Management Lab at SNU, supervised by Prof. Jonghun Park.

Affiliations and Expertise

Professor, Department of Industrial and Management Systems Engineering (IE), Kyung Hee University (KHU), Korea; Adjunct Professor, Department of Software Convergence (SWCon), KHU; Director, IE Graduate Program and Smart Factory Program, KHU

Sangwon Lee

Sangwon Lee is an Associate Professor in the Department of Interaction Science and the Department of Applied Artificial Intelligence at Sungkyunkwan University. He is also the director of the ID Square Lab (Interaction Design and Development Laboratory). He received his BS degree from Korea University, and his MS and PhD degrees from Pennsylvania State University. His research interests lie in human-AI interaction, user experience, affective computing, user modeling, and explainable artificial intelligence.

Affiliations and Expertise

Associate Professor, Department of Interaction Science and Department of Applied Artificial Intelligence, Sungkyunkwan University, South Korea; Director, ID Square Lab (Interaction Design and Development Laboratory)
