Technology

NVIDIA Says AI Future Blends Open-Source and Proprietary Strategies

Jensen Huang told GTC attendees "Proprietary versus open is not a thing. It's proprietary and open" — a line now driving NVIDIA's entire AI strategy.

By Dr. Elena Rodriguez
Source: nvidianews.nvidia.com

At a special session on open frontier models during NVIDIA GTC, founder and CEO Jensen Huang told attendees: "Proprietary versus open is not a thing. It's proprietary and open." That declaration became the spine of a company blog post published March 25 by Kari Briski, titled "The Future of AI Is Open and Proprietary," which laid out NVIDIA's dual strategy of supporting open-source ecosystems while pursuing proprietary AI infrastructure.

The post framed AI as "the defining technology of our time," fueled by a diverse ecosystem of models — large and small, open and proprietary, generalist and specialist — that is "essential for a future where every application will be powered by AI, every country will build it and every company will use it."

The post drew on backing from across the industry. NVIDIA announced the Nemotron Coalition, a global collaboration in which Black Forest Labs, Cursor, LangChain, Mistral AI, Perplexity, Reflection AI, Sarvam and Thinking Machines Lab will jointly build open frontier models. Members will develop an open model trained on NVIDIA DGX Cloud, with the result open sourced; the coalition's first model will underpin the upcoming Nemotron 4 family.

The open-source moves extended beyond model development. On March 24, NVIDIA donated a dynamic resource allocation driver for GPUs to the Kubernetes community. NVIDIA also introduced OpenShell, a new open-source runtime in its Agent Toolkit, along with an AI-Q Blueprint built with LangChain that lets developers create agents capable of searching enterprise knowledge and explaining how answers were produced. The blueprint uses a hybrid architecture that relies on frontier models for orchestration and Nemotron open models for research, a setup NVIDIA said can cut query costs by more than 50%.

AI-generated illustration

The proprietary side of the ledger is equally active. NVIDIA's NemoClaw runs the OpenShell runtime on RTX PRO 6000 Blackwell workstations, which deliver up to 4,000 TOPS of local AI compute and 96 gigabytes of GPU memory, giving enterprises the governance, control and privacy required to tackle complex business tasks entirely on premises. Separately, a March 25 NVIDIA post addressed how power-flexible AI factories could help stabilize the global energy grid, while a March 17 post described NVIDIA and telecom leaders building AI grids to optimize inference on distributed networks.

NVIDIA open models are being adopted by CodeRabbit, CrowdStrike, Cursor, Factory, ServiceNow and Perplexity for agentic AI; LG Electronics and Milestone Systems for physical AI; and Novo Nordisk, Viva Biotech and Manifold Bio for healthcare AI.

NVIDIA contributes open-source training frameworks and one of the world's largest collections of open multimodal data, including 10 trillion language training tokens, 500,000 robotics trajectories, 455,000 protein structures and 100 terabytes of vehicle sensor data. NVIDIA calls this an unprecedented scale of diverse open resources to accelerate innovation in language, robotics, scientific research and autonomous vehicles. The breadth of that commitment suggests that for NVIDIA, the open-versus-proprietary debate was never a dilemma to resolve: it was a market to own from both sides simultaneously.
