
WORLD

Issue No. 28: More Data, Less Truth?

  • Posted 2025-12-16





More Data, Less Truth?

A Warning from the Naver Dokdo Case



By Leesoi, Cub-Reporter

Leesoi3157@naver.com




On October 30, the AI search function of Naver, South Korea's largest portal site, sparked controversy after it presented Dokdo as part of Japanese territory. When users typed 'Japanese territory' into the search bar, the AI responded, "Japan's territory consists of Takeshima, the Northern Territories, the Senkaku Islands, and more." The issue, first surfaced by Professor Seo Kyung-duk of Sungshin Women's University, was settled after Naver announced that, as of 8:09 a.m., it had prevented its AI search service, AI Briefing, from displaying answers suggesting that 'Dokdo is Japanese territory.' Naver has since taken additional measures, and the AI Briefing response no longer appears when users enter that query. Naver, a global platform widely used by international users, faced strong criticism: because the Dokdo territorial dispute is a sensitive and important national issue, domestic platforms must accurately reflect historical facts.

AI Briefing does not answer solely from the data the model has learned; it generates answers from the content of search results. The problem arose because the service referenced territorial descriptions presented on external websites, so depending on which documents appear in the retrieved results, the AI's response can contain inaccurate information. Similar cases can easily be found on other AI-based information services, such as Google and Wikipedia.

While Naver resolved the immediate issue, this case raises a deeper question: what happens when AI systems present information without proper verification? As AI-powered services become central to how people access information, the responsibility of those who design, manage, and verify these systems becomes increasingly important. This article explores that responsibility and examines why stronger verification processes and ethical awareness among future AI practitioners are becoming essential.
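To make the mechanism concrete, the sketch below imagines a stripped-down retrieval-based answer pipeline in Python. Every name in it (search_web, generate_answer, the sample document) is hypothetical, not Naver's actual code; the point is simply that when no verification step sits between search and generation, an error on a retrieved web page flows straight into the answer.

    # Hypothetical sketch of a retrieval-based answer pipeline like the
    # one described above. All names are illustrative, not Naver's API.

    def search_web(query: str) -> list[str]:
        # Stand-in for the portal's search index; in the Dokdo case,
        # some retrieved pages carried Japan's territorial framing.
        return [
            "Japan's territory consists of Takeshima, the Northern "
            "Territories, the Senkaku Islands, and more.",
        ]

    def generate_answer(query: str, documents: list[str]) -> str:
        # The generator summarizes the retrieved text as-is. With no
        # verification step between retrieval and generation, errors
        # in the source documents flow straight into the answer.
        context = "\n".join(documents)
        return f"Based on search results: {context}"

    if __name__ == "__main__":
        docs = search_web("Japanese territory")
        print(generate_answer("Japanese territory", docs))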





Why accuracy matters for national platforms



Naver AI's screen showing Dokdo as Japanese territory (Photo by Professor Seo Kyung-duk, Yonhap News)



Although this incident may appear to be nothing more than a simple technical glitch, it raises deeper and more fundamental questions about how AI systems handle information at the very earliest stages of processing. When an AI model receives new input, it does not "understand" or "interpret" truth in a human sense. Instead, it identifies patterns, predicts relationships, and generates outputs based on statistical likelihood. This raises an important issue: if the initial information fed into the system contains inaccuracies, or if the model draws on ambiguous or flawed training data, then the AI may confidently reproduce or even amplify those errors.

The Dokdo mislabeling case illustrates this vulnerability with clarity. Territorial sovereignty is not merely a factual matter but a historically sensitive and politically charged topic, intertwined with national identity and diplomatic relations. Even a moment in which a platform incorrectly identifies Dokdo as Japanese territory carries the potential to distort public understanding, trigger public outrage, and damage the platform's credibility both domestically and internationally. With global users increasingly relying on Korean platforms, the stakes are higher than ever.

This is why the incident cannot be dismissed as an isolated technical mishap. Instead, it exposes the structural limitations of current AI systems and the platforms that deploy them. It highlights the need for more transparent data pipelines, better monitoring of AI-generated outputs, and stronger human oversight, especially for topics with profound social or historical implications. Ultimately, the error invites us to rethink not only how AI operates, but also how responsibly we build, supervise, and trust the tools that shape public knowledge.
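One form that stronger human oversight can take is a guardrail that holds AI answers on sensitive topics for review before they are displayed, broadly similar in spirit to Naver's blocking of the problematic response. The short Python sketch below is an illustrative assumption, not Naver's actual system; the keyword list and routing logic are invented for demonstration.

    # A simplified sketch of a sensitive-topic guardrail. The topic
    # list and threshold logic are assumptions made for illustration.

    SENSITIVE_TOPICS = {"dokdo", "takeshima", "territorial", "sovereignty"}

    def needs_human_review(query: str, answer: str) -> bool:
        # Route any query or answer touching a sensitive topic to a
        # human reviewer instead of publishing the AI output directly.
        text = f"{query} {answer}".lower()
        return any(topic in text for topic in SENSITIVE_TOPICS)

    query = "Japanese territory"
    answer = "Japan's territory consists of Takeshima and more."
    if needs_human_review(query, answer):
        print("Held for human verification before display.")
    else:
        print(answer)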




The hidden pipeline of machine learning

AI-generated responses may appear authoritative, but their internal mechanisms are fundamentally probabilistic. Large language models process text by breaking input into tokens, analyzing contextual relationships, and predicting the most likely next word based on patterns learned during training. Although these models can detect certain inconsistencies, they do not possess an inherent fact-checking system. Consequently, AI often relies on the data it was trained on, whether or not that data was entirely accurate. This explains why AI still makes mistakes: it is not a verification tool but a probability-driven model, despite the common expectation that AI can do anything a user wants.

Machine learning and deep learning systems operate through a complex, multi-stage pipeline designed to identify patterns and generate coherent outputs. During the training phase, models are exposed to massive datasets containing both accurate and inaccurate information, enabling them to learn statistical correlations rather than concrete truths. Once trained, the model interprets new input through multiple neural network layers that evaluate context, probability, and relevance. However, these systems do not double-check facts in the way humans do; they simply estimate the most likely response based on previous data. As a result, if the training data or internal representations contain misconceptions, the model may confidently produce incorrect information. This limitation underscores a crucial reality: AI does not inherently understand truth; it merely predicts patterns.
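The paragraph above can be illustrated with a toy example of next-token prediction. The sketch below uses made-up scores (logits) rather than a real trained model; it shows only that a model converts scores into probabilities with a softmax and emits the most likely token, with no step that checks whether the chosen token is factually correct.

    import math

    # Minimal sketch of next-token prediction with toy logits,
    # not a real trained model.

    def softmax(logits: dict[str, float]) -> dict[str, float]:
        # Convert raw scores into a probability distribution.
        exp = {token: math.exp(score) for token, score in logits.items()}
        total = sum(exp.values())
        return {token: value / total for token, value in exp.items()}

    # Hypothetical scores for the token after "Dokdo is part of ...".
    # If flawed training or source data made a wrong continuation score
    # highest, the model would output it just as confidently.
    logits = {"Korea": 2.1, "Japan": 1.7, "the sea": -0.5}

    probabilities = softmax(logits)
    prediction = max(probabilities, key=probabilities.get)
    print(probabilities)  # e.g. {'Korea': 0.57, 'Japan': 0.38, ...}
    print("Predicted next token:", prediction)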



(Image created by Midjourney)



The power of AI today

Today, AI systems hold unprecedented power in shaping public understanding, influencing discourse, and delivering information across global platforms. Their speed and accessibility allow individuals to obtain answers instantly, yet this convenience comes with significant responsibility. As AI-generated content becomes increasingly integrated into everyday decision-making, its potential to amplify misinformation also grows. Therefore, the true power of AI lies not in its ability to generate information, but in how society chooses to guide, supervise, and ethically utilize these technologies. For students pursuing AI-related fields, this moment calls for strong digital literacy, critical thinking, and a deep sense of ethical responsibility. The more AI advances, the more essential human judgment becomes in ensuring that technology serves truth rather than distorting it.




In the age of AI, information can only get closer to the truth through human choices and judgments. Now that various AI services have emerged and anyone can easily generate information, we must all move beyond critically consuming information to ethically producing it. Media literacy has traditionally focused on those who consume media content, but those who design and manage AI programs should also strengthen their media literacy. Therefore, students aspiring to careers in computing and AI should demonstrate a sense of ethical responsibility and devote significant effort to designing information-verification and data-learning structures.







Sources: https://news.sbs.co.kr/news  

https://www.news1.kr/it-science