Protecting Children’s Rights and Data Privacy in the Age of AI

Derek E. Baird
4 min read · Mar 27, 2023
Photo by Alex Knight

In the digital landscape of today’s world, the intersection of Artificial Intelligence (AI) and children’s privacy has become an increasingly pressing concern. With technology already playing an ever-expanding role in children’s lives, the generation of personal data has soared to unprecedented levels, encompassing online activities, school records, and even health information. As if that weren’t enough cause for worry, the rise of generative AI tools like ChatGPT and Google Bard has added another layer of complexity to this issue.

Children, often unknowingly, contribute vast amounts of data to large language models (LLMs), potentially paving the way for detailed profiles that could be exploited for targeted advertising or other dubious purposes. And while many countries have laws requiring parental consent before collecting data from children under 13, such as COPPA in the United States, enforcement remains challenging as companies find ways to circumvent these safeguards.

In this critical juncture, efforts are underway to identify and implement the much-needed guardrails to ensure the ethical use of AI in children’s products, with organizations like UNICEF and The World Economic Forum taking the lead in prioritizing children’s privacy and rights in the age of generative AI.

Diversity, Equity, and Bias

Another concern is the potential for AI systems to perpetuate bias and discrimination. An AI system trained on biased data can produce biased results. For example, if an LLM is trained on data that reflects societal biases, it may perpetuate those biases in the decisions it makes.

There have been several high-profile cases of AI systems showing bias against certain groups, such as facial recognition systems that are less accurate for people with darker skin tones. For children, biased AI systems could have long-lasting effects on their future opportunities and life outcomes.

To address these concerns, there are calls for greater transparency and accountability in how AI systems are developed and deployed. This includes ensuring that the datasets used to train AI systems are diverse and representative, and that the algorithms built on them are regularly audited for bias.
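One simple form such a bias audit can take is comparing a model's accuracy across demographic groups and flagging large gaps. The sketch below is illustrative only; the group labels and the idea of a single "accuracy gap" metric are assumptions, not a standard the article prescribes.

```python
# Hypothetical sketch of a minimal bias audit: compute a model's
# accuracy separately for each demographic group, then report the
# largest gap between groups. Group names are illustrative.

def accuracy_by_group(preds, labels, groups):
    """Return {group: accuracy} over parallel lists of predictions,
    true labels, and group memberships."""
    stats = {}  # group -> (num_correct, num_total)
    for p, y, g in zip(preds, labels, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (p == y), total + 1)
    return {g: c / t for g, (c, t) in stats.items()}

def max_accuracy_gap(preds, labels, groups):
    """Difference between the best- and worst-served groups;
    values near 0 suggest comparable accuracy across groups."""
    accs = accuracy_by_group(preds, labels, groups).values()
    return max(accs) - min(accs)
```

A real audit would go further (error types, intersectional groups, confidence intervals), but even this crude per-group breakdown surfaces the kind of disparity the facial recognition studies cited above reported.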

Privacy, AI & Children's Rights

The rise of generative AI has brought new challenges for those designing technology for children. With Gen Alpha, the first generation born entirely in the 21st century, growing up in a world where AI is increasingly embedded in their lives, it's important to learn from past mistakes and put up guardrails before allowing children to interact with this powerful technology.

To ensure that AI is designed in a way that respects children’s rights, there are several principles that should be followed. One of these is Designing for Child Rights, which emphasizes the importance of respecting children’s autonomy, safety, and privacy. This includes giving children control over their data, ensuring they are not subject to unfair or harmful treatment, and protecting their privacy by design.

Another principle that should be followed is Privacy by Design, which means building privacy protections into technology from the very beginning. This includes implementing strong data protection measures, limiting data collection to what is necessary for the product to function, and obtaining parental consent before collecting data from children.
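In code, Privacy by Design often reduces to two concrete gates: collect only the fields the product actually needs, and refuse to collect anything from an under-13 user without parental consent. The sketch below is a minimal illustration under those assumptions; the field names, `SignupRequest` type, and consent flow are hypothetical, not a real API.

```python
from dataclasses import dataclass

# Illustrative sketch of "privacy by design" gates for a children's
# app: data minimization plus a COPPA-style parental consent check.
# All names here are hypothetical.

ALLOWED_FIELDS = {"display_name", "age_band"}  # only what the product needs

@dataclass
class SignupRequest:
    fields: dict
    age: int
    has_parental_consent: bool

def minimize(fields: dict) -> dict:
    """Drop any submitted field that is not strictly required."""
    return {k: v for k, v in fields.items() if k in ALLOWED_FIELDS}

def process_signup(req: SignupRequest) -> dict:
    # Under-13 users require verified parental consent before any
    # data is stored, mirroring COPPA-style rules.
    if req.age < 13 and not req.has_parental_consent:
        raise PermissionError("Parental consent required")
    return minimize(req.fields)
```

The point of the sketch is that both protections live in the data path itself, not in a policy document: extra fields (an email address, say) are discarded before storage, and the consent check cannot be skipped.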

Children's Right To Participate

In addition, children need to be involved in developing the AI systems that affect them. This includes consulting with children to understand their needs and concerns, and involving them in the design and testing of AI systems. By involving children in the development process, we can ensure that these systems are built to meet their needs and respect their rights.

The UN Convention on the Rights of the Child (UNCRC) outlines all children's rights, including the right to privacy, the right to be heard, and the right to access information. These principles should be used to guide the development of AI systems for children, ensuring that their rights are respected and that they are not subject to harmful or discriminatory treatment.

Overall, it is essential that AI systems be developed and deployed in ways that respect children's rights and best interests: protecting their data privacy, addressing potential biases, and involving children in the development process.

By doing so, we can ensure that AI is developed to respect children’s rights and protect their privacy while also providing them access to the benefits this technology can offer.

Resources: Ethical AI For Kids

Derek E. Baird, MAEd, is the Chief Privacy Officer at BeMe Health. He is the author of The Gen Z Frequency, which is available in English, German, Vietnamese, Ukrainian, and Chinese editions on Amazon, Blinkist, and other retailers.


