Interview: A mathematician researching the “dream encryption”

Publication: Kookmin Ilbo

May 28, 2025

Context: Massive datasets—millions or tens of millions of people’s lifestyle patterns, preferences, social connections, and physical data—are extremely valuable. But how can we truly protect that data? Wouldn’t it be most effective if all computations were done while the data remained locked in an encrypted vault? And if hackers knew they couldn’t exploit the contents even after breaking in, they might give up entirely.

Key Interviewee: Professor Jung Hee Cheon (천정희), Department of Mathematical Sciences, Seoul National University

  • He developed the world’s first commercially viable homomorphic encryption (HE) algorithm—often referred to as “dream encryption.”

  • Since founding CryptoLab in 2017, he’s been driving global standards in 4th-generation cryptography.

How he explains HE:

“Think of homomorphic encryption as a remote-controlled robot inside a safe. The robot we developed can perform tasks inside the safe as quickly and skillfully as if you were doing them yourself. Since you never open the safe, the data is never exposed to hackers, and individuals can safely entrust their data to service providers.”

Problem HE solves:

  • Normally, to compute on data, you must first decrypt it—opening the vault, so to speak. That decrypted moment is exactly when AI systems train on private data (e.g., for facial recognition or personalized finance).

  • That split—decrypting data then computing—becomes a major weak point for hackers, and individuals understandably resist providing raw personal data.

  • Homomorphic encryption allows computation directly on encrypted data, eliminating that risk. Professor Cheon sees HE as a technology newly demanded by today’s AI-driven world.
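The core property—computing on data while it stays encrypted—can be sketched with a toy Paillier cryptosystem, a classic additively homomorphic scheme. This is an illustration only: it is not the CKKS scheme behind CryptoLab’s products, and the parameters below are far too small to be secure. The point is that multiplying two ciphertexts yields a ciphertext of the *sum* of the plaintexts, so a server can add values it can never read.

```python
import math
import random

# Toy Paillier cryptosystem (additively homomorphic).
# Demonstration parameters only -- tiny primes, no security,
# and NOT the CKKS scheme CryptoLab actually uses.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard Paillier generator choice
lam = math.lcm(p - 1, q - 1)    # private key (Carmichael lambda)
mu = pow(lam, -1, n)            # precomputed decryption factor

def encrypt(m: int) -> int:
    # Fresh randomness r, coprime to n, for each ciphertext.
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

c1, c2 = encrypt(20), encrypt(22)
# Multiplying ciphertexts adds the plaintexts underneath:
# the "vault" is never opened during the computation.
print(decrypt((c1 * c2) % n2))  # -> 42
```

Fully homomorphic schemes such as CKKS extend this idea to support both addition and multiplication on encrypted data, which is what makes encrypted AI inference possible.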

CryptoLab’s progress:

  • Raised ₩21 billion (~USD 15 million) in Series A funding in 2022.

  • Actively exploring HE applications across healthcare, finance, defense, and more.

  • This month, they released EFR, a fully encrypted facial-recognition system that stores and matches faces entirely in encrypted form.

  • They’re also part of the Korean government’s K‑Cloud project—developing an integrated “privacy-preserving AI” system over the next five years.

  • Meanwhile, Professor Cheon fields inquiries from companies building autonomous AI agents, all seeking to minimize hacking risks in the communications an AI assistant carries out while making decisions.