Human-Centered Artificial Intelligence (VSI: IPMC2022 HCAI)
A Special Issue for Information Processing & Management (IP&M), Elsevier
Note: This special issue is a Thematic Track at IP&MC2022. For more information about IP&MC2022, please visit https://www.elsevier.com/events/conferences/information-processing-and-management-conference.
Title of the Special Issue
Human-Centered Artificial Intelligence (VSI: IPMC2022 HCAI)
- Haoran Xie (Managing Guest Editor)
Associate Professor, Department of Computing and Decision Sciences at Lingnan University, Hong Kong, email@example.com
- Xiaohui Tao
Associate Professor, School of Sciences, University of Southern Queensland, Toowoomba, Australia, firstname.lastname@example.org
- Athena Vakali
Professor, Department of Informatics, Aristotle University, Thessaloniki, Greece, email@example.com
- Qiping Wang
Assistant Professor, Department of Information Management, East China Normal University, firstname.lastname@example.org
Artificial Intelligence (AI) and machine learning (ML), together with advances in decision making, prediction, knowledge extraction, and logical reasoning, are widely applied to challenges in diverse areas such as chatbots, machine translation, fraud detection, content recommendation, clinical diagnosis, and autonomous devices. Effective and prevalent as AI is in real-world scenarios, AI-based systems also raise scholars’ and practitioners’ concerns about bias, discrimination, result interpretability, algorithmic transparency, and the malicious use of AI. Indeed, “today’s most pressing questions in AI are human-centered,” as Dr. Peter Norvig of Stanford HAI has pointed out (Lynch 2021). Such knowledge of, and concerns about, human-centered AI (HCAI) motivated this Special Issue as an exploration of the matter in the era of Artificial Intelligence.
Appropriate adoption of AI can advance human welfare, but AI is a double-edged sword. Many people place little trust in AI because they do not know why or how decisions are made by AI systems. It is therefore essential for AI to make its decision-making process transparent: equipping AI systems with explanation capabilities builds trust between users and AI. Although machines are characterized by their ability to perform massive computations, human beings outperform machines in metacognition. Humans must therefore infuse values into the information generated by machines and make reasoned judgments about its quality. In this way, human involvement in the design, development, and evaluation of AI systems ensures practical insights, leading to systems that are more meaningful and better matched to users’ needs. Human-centered AI (HCAI) focuses on benefiting humanity through trustworthy and safe systems designed and developed by augmenting human intelligence with machine intelligence. HCAI comprises two categories: i) AI that addresses the human condition, emphasizing humanity by incorporating human intentions into AI systems and enabling AI to understand commonsense knowledge along with its ethical and social implications; and ii) the promotion of human understanding of AI systems through various approaches that address and mitigate errors caused by AI and enhance users’ confidence in AI decisions.
This Special Issue aims to advance knowledge and understanding of the design, development, deployment, application, and evaluation of human-centered AI and ML systems through in-depth dialogue between scholars and practitioners from diverse areas, including human-computer interaction, ML, AI, law, cognitive science, complex systems, and the humanities, in order to investigate and tackle the challenges arising in HCAI’s development. We invite authors to submit their HCAI-related research, including full-length, original, and unpublished papers presenting theoretical or experimental contributions as well as review studies, especially in explainable AI, interpretable ML, human-centered design, and human-machine systems.
Possible Topics of Submissions
Topics of interest include, but are not limited to:
- Redefinition of AI
- Knowledge-based expert systems
- Human-centered explainable AI
- Human-machine collaboration, integration, interaction, delegation, dialog
- Human-centered personalization and individualism
- Trustworthy AI, sustainable AI, fair AI, self-explaining AI, symbiotic AI
- Evaluation of HCAI for the good of humanity
- Human intention, cognition, emotion, behavior, and interaction in AI design
- Context-specific HCAI
- Human-AI complementarity and augmentation
- Cognitive tutoring, cognitive tutors, constraint-based tutoring systems
- Human-centered design for fair and responsible AI
- Human-centered knowledge-based tutors
- Human-centered decision support systems
- Augmented intelligence for decision-making
- Human-AI collaborative decision making
- Explanations, transparency, fairness, accountability of algorithmic decisions
- Rationale-generating explainable agent
- Development & evaluation of fair ML models
- Latest trends in HCAI research
- Education of HCAI
- Human-in-the-loop machine learning, reasoning, and planning
- AI governance, accountability, and self-surveillance
- Biases in AI algorithms and misuse of AI
- Ethics and societal impact of AI design
- Legal and ethical bases for responsible AI
Important Dates
| Online submission system opens | January 5, 2022 |
| Thematic track manuscript submission due date (early submissions are welcome, as reviews will be rolling) | June 15, 2022 |
| Author notification | July 31, 2022 |
| IP&MC2022 conference presentation and feedback | October 20-23, 2022 |
| Post-conference revision due date (earlier submissions are welcome) | January 1, 2023 |
Submit your manuscript to the Special Issue category (VSI: IPMC2022 HCAI) through the online submission system of Information Processing & Management. https://www.editorialmanager.com/ipm/
Authors should prepare their submissions following the Guide for Authors of the IP&M journal (https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors). All papers will be peer-reviewed following the IP&MC2022 reviewing procedures.
The authors of accepted papers are expected to participate in IP&MC2022 and present their work to the community to receive feedback. Accepted papers will then be invited for revision based on the feedback received at the IP&MC2022 conference. Submissions will receive premium handling at IP&M following its peer-review procedure and, if accepted, will be published in IP&M as full journal articles, with an option for a short conference version at IP&MC2022.
Shana Lynch (2021). Peter Norvig: Today’s Most Pressing Questions in AI Are Human-Centered. Stanford HAI. URL: https://hai.stanford.edu/news/peter-norvig-todays-most-pressing-questions-ai-are-human-centered, accessed November 11, 2021.