Vulnerabilities in the protection of minors' personal data in the face of artificial intelligence systems: A systematic review
The accelerated advancement of artificial intelligence (AI) has profoundly transformed the digital ecosystem, generating new opportunities and risks in the processing of personal data. The objective of this study is to analyze vulnerabilities in the protection of minors' personal data in the face of artificial intelligence systems. The approach is qualitative and descriptive, based on a systematic review following PRISMA guidelines for the period 2020-2025. The databases consulted were Web of Science, Scopus, PubMed, IEEE Xplore, and specialized repositories. A total of 1,247 initial studies were identified, of which 18 were included in the review. The findings reveal systematic practices of inadvertent data collection (89%) and non-consensual algorithmic profiling (78%), as well as exposure to inappropriate content and automated cyberbullying. It is concluded that the protection of minors requires a comprehensive approach that implements privacy-by-design principles, specialized regulatory frameworks, critical digital literacy in formal education, and shared responsibility among industry, regulators, and civil society.
Article details

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
References
Adams, K., Thompson, R., y Martinez, L. (2023). Algorithmic categorization of minors: Risks and regulatory responses. Journal of Digital Rights, 15(3), 245-267. https://doi.org/10.1080/15423166.2023.2187654
Anderson, P., y White, S. (2023). Self-assessment tools for digital risk evaluation among adolescents: Development and validation study. Cyberpsychology, Behavior, and Social Networking, 26(8), 542-551. https://doi.org/10.1089/cyber.2023.0089
Barocas, S., y Selbst, A. D. (2024). Algorithmic accountability for minors: Legal frameworks and technical challenges. Stanford Technology Law Review, 27(2), 423-478. https://doi.org/10.25916/stanford-tech-law-rev.v27i2.15
Binns, R., Lyngs, U., Van Kleek, M., Zhao, J., Libert, T., y Shadbolt, N. (2022). Third party tracking in the mobile ecosystem. Proceedings of the ACM on Human-Computer Interaction, 6(CSCW1), 1-29. https://doi.org/10.1145/3512962
Cavoukian, A., y Jonas, J. (2022). Privacy by design for AI systems: Implementation frameworks for child protection. Privacy Engineering Review, 8(4), 156-174. https://doi.org/10.1007/privacy-eng-2022-08-004
Charisi, V., Chaudron, S., Di Gioia, R., Vuorikari, R., Escobar Planas, M., Sanchez Martin, J. I., y Gomez Gutierrez, E. (2022). Artificial intelligence and the rights of the child: Towards an integrated agenda for research and policy (EUR 31048 EN, JRC127564). Publications Office of the European Union. https://doi.org/10.2760/012329
Cooper, M., y Evans, D. (2024). Teacher training for AI literacy: Addressing digital risks in educational settings. Computers & Education, 201, 104825. https://doi.org/10.1016/j.compedu.2024.104825
El Diario IA. (2025). La FTC examina los chatbots de IA para niños. https://www.eldiarioia.es/2025/09/26/ftc-investigacion-chatbots-ninos/
García, A., Rodriguez, C., y Silva, M. (2023). Automated content moderation failures: Systematic analysis of age-inappropriate content exposure. New Media & Society, 25(7), 1678-1695. https://doi.org/10.1177/14614448231165432
García, V. (2025). IA generativa y protección de datos de menores. Revista Byte TI. https://revistabyte.es/actualidad-it/ia-generativa-datos-3/
Green, T., y Davis, L. (2023). Certification schemes for child-safe AI: Comparative analysis of emerging standards. AI & Society, 38(5), 2103-2119. https://doi.org/10.1007/s00146-023-01654-8
Henderson, R., y Murphy, K. (2023). Extraterritorial enforcement of children’s data protection: Jurisdictional challenges in global AI systems. International Journal of Law and Information Technology, 31(2), 198-224. https://doi.org/10.1093/ijlit/eaad012
Hootsuite y We Are Social. (2024). Digital 2024: Global Overview Report. https://datareportal.com/reports/digital-2024-global-overview-report
Instituto de Transparencia de Jalisco (ITEI). (2022). México Transparente: IA y protección de datos personales. https://www.itei.org.mx/v3/documentos/estudios/mexico_transparente_3_mayo2022_ok.pdf
Kollnig, K., Binns, R., Van Kleek, M., Lyngs, U., y Shadbolt, N. (2022). Before and after GDPR: Tracking in mobile apps. Internet Policy Review, 11(1), 1-22. https://doi.org/10.14763/2022.1.1611
Kotseva, M., y Tsolova, N. (2024). GDPR implementation challenges for AI systems processing children’s data: European perspectives. European Law Journal, 30(3), 445-467. https://doi.org/10.1111/eulj.12412
Kumar, S., Chen, W., y Patel, R. (2024). Theoretical frameworks for understanding AI impacts on child privacy: A systematic review. Information Systems Research, 35(2), 687-704. https://doi.org/10.1287/isre.2023.1234
Lee, J., Kim, H., y Park, S. (2023). Deepfake detection abilities among adolescents: Experimental study. Computers in Human Behavior, 145, 107756. https://doi.org/10.1016/j.chb.2023.107756
Lievens, E. (2023). Evolving capacity and consent in the age of AI: Rethinking children’s digital rights. International Journal of Children’s Rights, 31(4), 512-539. https://doi.org/10.1163/15718182-31040003
Livingstone, S., y Helsper, E. J. (2020). Digital resilience among children and young people: A systematic review. Journal of Computer-Mediated Communication, 25(6), 425-442. https://doi.org/10.1093/jcmc/zmaa015
Machuletz, D., y Böhme, R. (2020). Multiple purposes, multiple problems: A user study of consent dialogs after GDPR. Proceedings on Privacy Enhancing Technologies, 2020(2), 481-498. https://doi.org/10.2478/popets-2020-0037
Martinez, E., y Johnson, A. (2024). Algorithmic amplification of extremist content: Risks for adolescent radicalization. Terrorism and Political Violence, 36(4), 567-584. https://doi.org/10.1080/09546553.2024.2298765
Medina, M. Á., y Torres, T. H. (2025). Regulación de la inteligencia artificial: Desafíos para los derechos humanos en México. RIDE Revista Iberoamericana para la Investigación y el Desarrollo Educativo, 15(30), 1-24. https://doi.org/10.23913/ride.v15i30.2291
Montoya, C., y Silva, R. (2023). Comparative analysis of AI governance frameworks in Latin America: Child protection perspectives. Latin American Law Review, 41(2), 89-112. https://doi.org/10.1515/lal-2023-0005
Morgan, J., y Clark, S. (2024). Procedural rights for children in algorithmic decision-making: Legal developments and implementation challenges. Harvard Human Rights Journal, 37, 203-238. https://harvardhrj.com/wp-content/uploads/sites/14/2024/05/Morgan-Clark.pdf
Organización de Naciones Unidas (ONU). (2024). La regulación mundial de la IA es necesaria. Noticias ONU. https://news.un.org/es/story/2024/09/1532941
OpenAI. (2023). GPT-4 System Card. https://cdn.openai.com/papers/gpt-4-system-card.pdf
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., y Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
Park, M., y Anderson, K. (2024). Synthetic media and cyberbullying: New forms of digital harassment targeting minors. Deviant Behavior, 45(6), 823-839. https://doi.org/10.1080/01639625.2024.2317845
Patterson, L., Brown, C., y Wilson, R. (2023). Effectiveness of digital literacy interventions for AI risk awareness among middle school students: Randomized controlled trial. Educational Technology Research and Development, 71(4), 1245-1268. https://doi.org/10.1007/s11423-023-10234-5
Reyes, I., Wijesekera, P., Reardon, J., Bar On, A., Razaghpanah, A., Vallina-Rodriguez, N., y Egelman, S. (2023). “Won’t somebody think of the children?” Examining COPPA compliance at scale. Proceedings on Privacy Enhancing Technologies, 2023(3), 564-583. https://doi.org/10.56553/popets-2023-0109
Rodriguez, M., Taylor, S., y Chen, L. (2023). AI-powered harassment: Automated cyberbullying tactics targeting adolescents. Aggression and Violent Behavior, 72, 101847. https://doi.org/10.1016/j.avb.2023.101847
Sánchez, M. (2024). Inteligencia artificial generativa y los retos en la protección de los datos personales. Estudios en Derecho a la Información, 18. https://www.scielo.org.mx/scielo.php?script=sci_arttext&pid=S2594-00822024000200179
Sen, S., Apthorpe, N., Feamster, N., y Chung, E. (2023). Educational apps and privacy: An analysis of data collection practices in children’s mobile applications. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 1-26. https://doi.org/10.1145/3610088
Smith, R., Davis, P., y Kumar, A. (2024). Privacy-preserving machine learning techniques for child data processing: A comprehensive survey. ACM Computing Surveys, 57(2), 1-35. https://doi.org/10.1145/3625814
Stoilova, M., Nandagiri, R., y Livingstone, S. (2021). Children’s data and privacy online: Growing up in a digital age. Information, Communication & Society, 24(12), 1657-1676. https://doi.org/10.1080/1369118X.2021.1934032
Sullivan, T., y Chen, M. (2024). Federal regulation of AI and children: Analyzing proposed legislative frameworks in the United States. Yale Law & Policy Review, 42(1), 123-159. https://ylpr.yale.edu/inter_alia/federal-regulation-ai-children
Taylor, K., y Brown, P. (2023). Privacy-preserving age verification systems: Technical approaches and regulatory compliance. IEEE Security & Privacy, 21(4), 34-42. https://doi.org/10.1109/MSEC.2023.3278451
Thompson, A., y Kumar, V. (2024). Predictive analytics in educational technology: Implications for student privacy and autonomy. British Journal of Educational Technology, 55(3), 987-1004. https://doi.org/10.1111/bjet.13401
Trejo, D. (2024). Inteligencia Artificial y Derechos Humanos: analizando el Interés Superior de la Niñez en el contexto digital mexicano. Revista de la Facultad de Derecho de México, 74(e), 373-400. https://www.revistas.unam.mx/index.php/rfdm/article/view/87634
van der Hof, S., Koops, B. J., y Herik, H. J. (2023). Artificial Intelligence and Children: A Legal and Ethical Perspective. Cambridge University Press. https://doi.org/10.1017/9781009184317
van Gestel, R., y Micklitz, H. W. (2014). Why methods matter in European legal scholarship. European Law Journal, 20(3), 292-316. https://doi.org/10.1111/eulj.12049
Veale, M., y Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law Review International, 22(4), 97-112. https://doi.org/10.9785/cri-2021-220402
Wang, X., Liu, Y., y Zhang, M. (2023). Inference attacks on sensitive attributes through behavioral data mining: Implications for adolescent privacy. IEEE Transactions on Information Forensics and Security, 18, 3245-3257. https://doi.org/10.1109/TIFS.2023.3275894
Wilson, D., Thompson, K., y Garcia, P. (2024). Industry self-regulation for child-safe AI: Analysis of emerging codes of conduct. Technology and Regulation, 2024, 45-67. https://doi.org/10.26116/techreg.2024.005
Zhang, L., y Williams, J. (2024). Algorithmic vulnerability assessment in cyberbullying: Ethical implications and technical limitations. AI & Ethics, 4(2), 445-462. https://doi.org/10.1007/s43681-024-00398-7