
Child-centered AI researcher reveals critical need for a multisectoral approach

Oxford, UK | May 9, 2024

By Ian Evans

Photo of a toddler peering at the screen of a smartphone. (Photo: GCShutter/E+ via Getty Images)

When AI is used with children, manipulation can be insidious, and even good intentions can lead to questionable results. To guide responsible AI development, we need to establish a dialogue among children, families, researchers and industry.

The use of AI often starts with the best intentions. Consider how AI has been used in some nurseries and preschools: Collecting data on children’s movements to alert staff if a child was about to fall down so they could intervene, or using children’s behavior data to identify early signs of learning disorders.

These were among the examples cited by Dr Jun Zhao. As a Senior Researcher in the Department of Computer Science at the University of Oxford, she’s studying the effects of AI on children. Her work examines the impact of algorithm-driven decision-making — for example, which TikTok videos get served up to users, how data about children is used, and the potential impact of generative AI.

Jun advocates for a more child-centered approach to AI and noted that it’s essential for researchers and innovators to communicate with each other.

“Right now, that isn’t happening,” she said. “There’s a disconnect between child-centered design communities and the AI innovation community.”

In the case of the nursery schools, Jun questioned both the value and the ethics of the projects.

“Do you really need technology for that?” she asked. Meanwhile, “the amount of data they’re collecting from each individual to train the algorithms to achieve this kind of objective is astonishing — and often done without regulation or consent from the children.”

Dr Jun Zhao gives a presentation on “Ethical principles in research (and practice) involving human participants” at the University of Oxford, where she is a Senior Researcher in the Department of Computer Science.


In the 18 months since the launch of ChatGPT, AI has become one of the hottest topics in research, politics, art and industry. Its effects are being felt everywhere. In STEM, it can speed up the research process and support patient care.

Of course, there are still discussions to be had about the potential downsides to AI, and even if it doesn’t pose an existential threat to humanity, it still needs governance and guardrails to prevent harm. AI involves a huge range of stakeholders, from innovators, industry figures and politicians to teachers, parents and children. And it’s a vivid example of the importance of research communication and multisectoral collaboration in helping our society navigate these complex issues.

The child-centered design community, Jun noted, has existed for decades and has done a great deal of work in examining how to design systems that are child friendly and consider their developmental needs.

“Meanwhile, I see innovators in AI, colleagues of mine in AI, or people who traditionally build healthcare and education systems who’ve suddenly discovered AI, and they just integrate AI into an assistant, not always with the careful thinking you would expect. I see a lot of harmful situations being exacerbated because of this disconnect in dialogue.”

Elsevier’s 2022 survey on Confidence in Research showed that nearly 70% of respondents from the world of research saw educating others about their field as one of their main roles. Jun’s experience shows the urgency of that role and the ways it can be done effectively. She described a workshop held by the Oxford Child-Centered Design Lab she leads:

“We set up an international workshop in Hamburg last year bringing industrial practitioners and AI practitioners into the dialogue,” she said, adding that they’re organizing a special issue about it. “And what we need to do next is go to AI conferences and use workshops there to educate technical researchers who may not know how to approach issues around child safety.”

In some instances, child safety issues face challenges similar to those around gender and ethnicity in research and health contexts. Jun explained:

If you use a lot of data, but it’s all gained from a small cohort of children, it’s not going to be representative of children of diverse populations. As with a lot of algorithmic predictions, you can end up with bias.
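To make that failure mode concrete, here is a minimal sketch in Python with invented data: a classifier trained on a narrow cohort, where two features happen to be correlated, learns a shortcut that breaks when the model is applied to a more diverse population. The features, cohort sizes and numbers are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Narrow training cohort: a spurious feature (x0) is almost a copy of
# the genuine signal (x1), so the model can lean on either one.
x1 = rng.normal(size=n)
x0 = x1 + rng.normal(scale=0.1, size=n)
y = (x1 > 0).astype(int)
model = LogisticRegression().fit(np.column_stack([x0, x1]), y)

# More diverse population: the correlation no longer holds, so the
# weight placed on the spurious feature now skews the predictions.
x1_d = rng.normal(size=n)
x0_d = rng.normal(size=n)
y_d = (x1_d > 0).astype(int)

print("in-cohort accuracy: ", model.score(np.column_stack([x0, x1]), y))
print("diverse-cohort accuracy:", model.score(np.column_stack([x0_d, x1_d]), y_d))
```

In-cohort accuracy is near perfect; on the diverse population it drops sharply. That gap is the unrepresentativeness problem Jun describes: the model was never wrong about the children it saw, only about all the children it didn’t.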

In many cases, such as the use of AI by nursery schools, the ethics of AI are still being sorted out. Jun also highlighted findings from a recent report by Internet Matters: in the UK alone, 54% of children who use generative AI tools have used them to help with schoolwork, but only 40% of schools are talking to their students about what AI is and how it should be used. Summarizing the findings, Jun said: “AI has the ability to influence children’s creativity and their ability to navigate knowledge. Yet we know little about the effects of raising a generation that doesn’t know that everything they are learning is being personalized by algorithms.”

By way of example, Jun noted that children often like to learn in an explorative, open way following the threads that interest them. “Do we want to end up in situations where children’s education is based around the idea that ‘You didn’t do well with these fractions, so let’s just keep serving you that until you get it right’?”

There is increasing interest in “fair” AI, Jun added, noting that she’s observed a rising level of political discussion about child-related regulation:

The difficulty is going into AI conferences where innovators and industry haven’t started thinking about these issues from children’s perspectives yet. We need to communicate our research to those stakeholders if we’re going to create AI systems that are better aligned to people’s values.

Dr Jun Zhao is a Senior Researcher in the Department of Computer Science at Oxford University. Her research focuses on investigating the impact of AI-based systems on our everyday lives, with a particular emphasis on families and young children. She takes a human-centered approach, focusing on understanding real users’ needs in order to design technologies with tangible, real-world impact.

She is currently leading the Oxford Child-Centered AI Design Lab and the Oxford Martin School EWADA project. She has published over 100 peer-reviewed publications and received multiple best paper awards. She frequently speaks about AI for children and families. She was part of the 100 Brilliant Women in AI and Ethics global initiative 2019-20 to promote diversity and equality in this critical research area.


There have already been pockets of success that emphasize the importance of researcher communication. Jun pointed to the Online Safety Act in the UK. Before the act was passed in 2023, the discussion around it centered on harmful content and the dangers of strangers approaching children online. Jun’s research revealed a different dimension to the issue:

“We realized that a lot of the discussion around screen time and around the content children were watching could potentially be attributed to behavior manipulation. It wasn’t just about content — it was about the algorithmic systems that would keep children on a platform and potentially surface increasingly harmful material.”

The Online Safety Act does contain provisions about algorithmic manipulation. “That was a positive,” Jun said. “But algorithmic manipulation is continuing to present a real danger as platforms evolve their tactics for keeping children engaged.”

For example, Jun highlighted the lessons content and social media platforms are learning from gaming platforms. Gaming platforms use various mechanisms to keep children interested and engaged, from rewards to recommendations and new types of games, she explained, citing the 5Rights Foundation’s 2023 report Disrupted Childhood: The Cost of Persuasive Design. Content platforms are analyzing those behaviors and using them to keep children on the platform and interacting with it.
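To illustrate the kind of loop being described, here is a hypothetical sketch of an engagement-maximizing recommender, written as a simple epsilon-greedy bandit. The content categories and watch-time numbers are invented, and real platforms are vastly more sophisticated, but the core dynamic (serve whatever held attention longest, then reinforce it) is the mechanism the report criticizes.

```python
import random
from collections import defaultdict

# Invented engagement signal: some categories hold a child's attention
# longer than others (mean minutes watched per video).
MEAN_WATCH_MINUTES = {"crafts": 3, "unboxing": 6, "gaming": 9, "challenge": 12}

def watch(category):
    """Simulate a noisy watch-time measurement for one video."""
    return max(0.0, random.gauss(MEAN_WATCH_MINUTES[category], 1.0))

def recommend(history, catalog, explore=0.1):
    """Epsilon-greedy: mostly serve the category with the highest average
    watch time so far, occasionally exploring something new."""
    untried = [c for c in catalog if c not in history]
    if untried or random.random() < explore:
        return random.choice(untried or catalog)
    return max(history, key=lambda c: sum(history[c]) / len(history[c]))

catalog = list(MEAN_WATCH_MINUTES)
history = defaultdict(list)
for _ in range(50):
    category = recommend(history, catalog)
    history[category].append(watch(category))

# The loop concentrates on the most engaging category, with no notion
# of whether more of it is good for the child.
print({c: len(v) for c, v in history.items()})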

“What’s really harmful for children — and this can be kids at the age of 3 or 4 years old — is that their neural networks aren’t well developed to resist that kind of nudging, but they’re still being exposed to this system of manipulation and reward,” Jun said:

That’s something we’re really concerned about — this exploitation of data gathered from children, an exploitation of design patterns to nudge user satisfaction to keep children on a platform for as long as possible.

Jun advocates for discussion and understanding as key tactics for addressing these issues at home. Meanwhile, transparency of sourcing is key for research products that use GenAI. For example, Elsevier’s Scopus AI provides references for statements it generates, giving researchers the opportunity to follow those links back to the source to confirm that findings are in line with the interpretation provided by the AI platform.
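The retrieve-then-cite pattern that paragraph describes can be sketched generically. The snippet below is a toy illustration of the general idea, not Scopus AI’s implementation: the corpus, document identifiers and overlap scoring are all invented stand-ins.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str  # hypothetical identifier a reader could follow to the source
    text: str

CORPUS = [
    Document("source-1", "Persuasive design patterns can increase children's screen time."),
    Document("source-2", "Recommender systems personalize the content served to children."),
]

def retrieve(query, corpus, k=2):
    """Toy lexical retrieval: rank documents by word overlap with the query."""
    terms = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: -len(terms & set(d.text.lower().split())))
    return ranked[:k]

def answer_with_citations(query):
    """Return generated text together with the references that ground it."""
    sources = retrieve(query, CORPUS)
    summary = " ".join(d.text for d in sources)  # stand-in for the generation step
    return {"answer": summary, "references": [d.doc_id for d in sources]}

print(answer_with_citations("how does persuasive design affect children"))
```

Whatever the generation step produces, the point of the structure is the references field: each statement stays traceable to a source the reader can check, which is the transparency property described above.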

Jun agreed that ensuring users are equipped to understand the strengths and limitations of AI platforms is essential:

In lockdown, we saw the rise of technologically complex platforms such as Google Classroom, which integrates learning analytics and personalized learning and is adding open AI technologies. Teachers and schools don’t really have the resources to understand and manage the advantages and shortcomings of these technologies, and there’s no national program to train them. That really puzzles me.

Still, Jun is confident that by communicating her research, many of the issues around the data AI collects and its potential for shaping children’s behavior can be addressed.

“We’ve been able to reach a lot of schools, children and families with our research and outreach programs, and we’ve had a lot of rewarding conversations,” she said. “They share the concerns that we have, and they’re aware of these issues.

“It gives me hope to hear a child of 10 or 11 saying, ‘YouTube shouldn’t have our data — it belongs to us.’ They question why it’s not being regulated, and they are craving better control. It gives me a strong sense of encouragement.”

Jun observed that the generation currently growing up with YouTube and with AI-based learning technologies in the classroom is one day going to shape the future. The digital habits that they consider “normal” will be embedded in the technologies that they themselves build:

“If these future technologies are shaped by people who think behavior-nudging is fine, who think that any and all data collection is fine, then it becomes a dangerous future to imagine,” she said. “But the engagements that we’re undertaking show that people respond really well to messages about the more responsible use of technology, so that gives me hope.

“Nonetheless, we’re reaching hundreds, maybe thousands, of people at most, so as a research community, we need to amplify voices around responsible innovation. It’s something every researcher and innovator needs to consider. It can’t just be a narrow topic.”

Contributor

Ian Evans

Senior Director, Editorial and Content

Elsevier

Read more from Ian Evans