DeepSeek, AI model a transformative approach to global innovation,”  Says Zhou Hongyi

0
156

BEIJING, MAR 5 (DNA): In the rapidly evolving landscape of Artificial Intelligence(AI), open-source models are emerging as a powerful catalyst for technological democratization and global collaboration.

“Open-source AI represents more than just a technological strategy – it’s a transformative approach to global innovation,” said Zhou Hongyi, a member of the Chinese People’s Political Consultative Conference (CPPCC) and founder of Chinese cybersecurity company 360 Security Group, at this year’s Two Sessions, according to a report carried by China Economic Net on Wednesday.

Open-source AI fundamentally challenges the traditional closed technological ecosystems. “First and foremost, open-source will triumph over closed-source systems,” Zhou asserts.

He highlights DeepSeek, an open-source AI model, as a prime example of this paradigm shift. Within a short span, DeepSeek transformed from an unknown entity to an industry standard that companies and developers worldwide are eager to adopt.

The open-source approach creates a unique ecosystem of collaborative innovation. “By going open-source, a mechanism is created where companies and developers naturally choose to build applications on top of the platform,” Zhou explains.

While a company like DeepSeek might not directly monetize its technology, the returns are substantial: global talent, including developers, engineers, professors, and doctoral students, contribute to improving the technology, creating what Zhou describes as a “biological big bang” of technological development.

One of the most compelling aspects of open-source AI is its potential to level the technological playing field. “For many countries lacking financial resources and technical expertise, open-source models like DeepSeek provide an opportunity to develop their own foundational AI models,” Zhou notes.

This model breaks through national boundaries and technological barriers, fostering an open and inclusive innovation ecosystem where all countries can participate equally in AI development.

The approach has already shown remarkable success. Major Chinese tech companies like Baidu, Tencent, and the three telecommunications operators have integrated open-source models. Internationally, companies such as NVIDIA, Microsoft, and Amazon, initially skeptical, have begun incorporating these models into their ecosystems.

At the international level, China has been proactively championing open-source AI as a model of global cooperation.

Last December, China and Zambia co-chaired a meeting of the Group of Friends for International Cooperation on AI Capacity-building at the UN headquarters.

Representatives from over 80 countries and some UN agencies attended, expecting the Group to boost AI capacity building cooperation, governance, and close the digital divide.

This vision extends beyond technological competition – it represents a new paradigm of global cooperation, where technological advancement is seen as a shared journey rather than a zero-sum game.

Addressing the critical issue of AI safety, Zhou offers a nuanced perspective.

“Not developing AI is the greatest insecurity,” he argues. He believes that AI safety should not be overgeneralized but approached systematically.

The core challenge lies in the model’s susceptibility to manipulation, including “AI hallucinations” (a phenomenon where AI generates false information), potential manipulation attacks, and unauthorized information access.

While Zhou suggests that hallucinations can be mitigated through Internet knowledge base corrections, enterprise-specific knowledge library integration, and multi-model verification approaches.

Interestingly, he views “AI hallucinations” not as a purely negative trait but as a manifestation of intelligence and creativity.

Zhou proposes a revolutionary approach called “model-to-model” regulation, which involves using intelligent models to manage knowledge base access, controlling intelligent agent invocations, and mitigating base model “nonsense” and manipulation attempts.