
The following is a guest post to the FPF blog authored by Dr. Haksoo Ko, Professor at Seoul National University School of Law, FPF Senior Fellow and former Chairperson of South Korea’s Personal Information Protection Commission. The guest post reflects the opinion of the author only and does not necessarily reflect the position or views of FPF and our stakeholder communities. FPF provides this platform to foster diverse perspectives and informed discussion.
1. Introduction: From “trade-off” rhetoric to mechanism design
I served as Chairperson of South Korea’s Personal Information Protection Commission (PIPC) between 2022 and 2025. Nearly every day I found myself at the intersection of privacy enforcement, artificial intelligence policy, and innovation strategy. I was asked, repeatedly, whether I was a genuine data protectionist or a full supporter of unhindered data use for innovation. The question reflects a familiar assumption: that there is a dichotomy between robust privacy protection on one hand and rapid AI/data innovation on the other, and that a country must choose between the two.
This analysis draws on the policy-and-practice vantage point I gained in that role to argue that innovation and privacy are compatible when institutions establish suitable mechanisms that reduce legal uncertainty while maintaining constructive engagement and dialogue.
Korea’s recent experience suggests that the “innovation vs. privacy” framing is analytically under-specified. The binding constraint is often not privacy protection as such, but uncertainty as to whether lawful pathways exist for novel data uses. In AI systems, this uncertainty is heightened by the intricate nature of their pipelines. Factors such as large-scale data processing, extensive use of unstructured data, composite modeling approaches, and subsequent fine-tuning or other modifications all contribute to this complexity. The main practical issue is less about choosing among lofty values; it is more about operationalizing workable mechanisms and managing risks under circumstances of rapid technological transformation.
Since 2023, Korea’s trajectory can be read as a pragmatic move toward mechanisms of compatibility—institutional levers that lower the transaction costs of innovative undertakings while preserving proper privacy guardrails. These levers include structured pre-deployment engagement, controlled experimentation environments, risk assessment frameworks that can be translated into repeatable workflows, and a maturing approach to privacy-enhancing technologies (PETs) governance.
Conceptually, the approach aligns with the idea of cooperative regulation: regulators offer clearer pathways and procedural predictability for innovative undertakings, while also deepening their understanding of the technological underpinnings of these new undertakings.
This article distills the mechanisms Korea has attempted in an effort to operationalize compatibility of privacy protection with the AI-and-data economy. The emphasis is pragmatic: to identify which institutional levers reduce legal and regulatory uncertainty without eroding accountability, and how those levers map to the AI lifecycle.
2. Korea’s baseline architecture of privacy protection
2.1 General statutory backbone and regulatory capacity
Korea maintains an extensive legal framework for data privacy, primarily governed by the Personal Information Protection Act (PIPA), and further reinforced through specific guidance and strong institutional capacity of the PIPC. The PIPA supplies durable principles and enforceable obligations, while guidance and engagement tools translate those principles and statutory obligations into implementable controls in emerging contexts such as generative AI.
The PIPA embeds familiar principles into statutory obligations: purpose limitation, data minimization, transparency, and various data subject rights. In AI settings, the central challenge has been their application: how to interpret these obligations in the context of, for example, model training and fine-tuning, retrieval-augmented generation (RAG), automated decision-making, and the extension of AI into physical AI and other domains.
2.2 Principle-based approach combined with…