In a move to embrace the potential benefits of generative AI for younger users while addressing safety concerns, Anthropic, an AI startup, has announced a significant policy shift. The company will now allow minors to access third-party applications powered by its AI models, provided that developers implement specific safety features and comply with applicable regulations.
Outlined in a blog post on Friday, Anthropic’s updated policy aims to strike a balance between enabling educational and tutoring opportunities for minors through AI tools and ensuring responsible use. The company acknowledges that AI can offer significant advantages in areas such as test preparation and tutoring support for younger users.
However, to mitigate potential risks, Anthropic has outlined a set of safety measures that developers creating AI-powered apps for minors must implement. These include age verification systems, content moderation and filtering, and educational resources on “safe and responsible” AI use for minors. Additionally, Anthropic may provide “technical measures” to tailor AI product experiences for minors, such as a “child-safety system prompt” that developers targeting minors would be required to integrate.
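As a rough illustration only, the sketch below shows how a developer might attach such a prompt to requests sent to Anthropic's Messages API; the prompt text, model name, and age flag are placeholder assumptions for this example, not material Anthropic has published.

```python
# Hypothetical sketch: pairing a placeholder child-safety system prompt with
# Anthropic's Messages API. The prompt wording, model choice, and age flag are
# illustrative assumptions, not Anthropic's published child-safety materials.
import anthropic

# Placeholder standing in for a child-safety system prompt a provider might supply.
CHILD_SAFETY_SYSTEM_PROMPT = (
    "You are a tutoring assistant for a minor. Keep answers age-appropriate, "
    "decline unsafe or adult topics, and suggest involving a trusted adult "
    "when a question falls outside schoolwork."
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def tutor_reply(question: str, user_is_minor: bool) -> str:
    """Answer a study question, applying the child-safety prompt for minors.

    Assumes an upstream age-verification step has already set `user_is_minor`.
    """
    response = client.messages.create(
        model="claude-3-haiku-20240307",  # placeholder model choice
        max_tokens=512,
        # Attach the safety instructions to every call made on behalf of a minor.
        system=CHILD_SAFETY_SYSTEM_PROMPT if user_is_minor else "You are a tutoring assistant.",
        messages=[{"role": "user", "content": question}],
    )
    return response.content[0].text


print(tutor_reply("Can you explain photosynthesis for my biology quiz?", user_is_minor=True))
```

In a setup like this, content filtering and logging would typically sit around the API call as well, but those pieces depend on the developer's own moderation tooling and are omitted here.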
Compliance with applicable child safety and data privacy regulations, including the Children’s Online Privacy Protection Act (COPPA), is also mandatory. Anthropic plans to periodically audit apps for compliance and suspend or terminate the accounts of developers who repeatedly violate the requirements. Developers must also clearly state their compliance on public-facing sites or documentation.
Anthropic’s policy change comes as children and teenagers are increasingly turning to generative AI tools for assistance with schoolwork and personal issues. Rival generative AI vendors, including Google and OpenAI, have also been exploring use cases aimed at children, forming dedicated teams and collaborating with organizations focused on child safety and AI guidelines.
While generative AI's benefits for education and personal growth are widely recognized, concerns persist about negative impacts such as misinformation and deepfakes. Calls for guidelines and regulation of generative AI in education and for minors have been growing, with organizations like UNESCO advocating for safeguards and public engagement.
Anthropic’s new policy represents a step toward addressing these concerns and integrating generative AI responsibly into the lives of younger users, while giving developers room to build solutions tailored to those users’ needs.