chips – RoboticsBiz (https://roboticsbiz.com) | Everything about robotics and AI

Neuromorphic chips: The brain-inspired future of AI computing
https://roboticsbiz.com/neuromorphic-chips-the-brain-inspired-future-of-ai-computing/ (Thu, 15 May 2025)

Artificial Intelligence (AI) has reached incredible milestones—large language models like GPT-4 can write essays, summarize documents, and hold human-like conversations, while image generators and video tools are producing stunningly realistic media. But as these models scale, an inconvenient truth emerges: the hardware powering them is rapidly approaching its limits.

Today’s AI runs on GPUs—powerful, parallel-processing chips originally designed for gaming. While they’ve been repurposed to handle the heavy workloads of AI, these chips are inefficient, power-hungry, and increasingly unsustainable. As AI models grow larger, faster, and more capable, the need for a new kind of computing architecture becomes unavoidable.

Enter neuromorphic chips—a revolutionary approach inspired by the most efficient computational system known to us: the human brain. This article explores the limitations of current AI hardware, the promise of neuromorphic design, and how it could power the next generation of intelligent systems.

1. The Problem with Today’s AI Hardware

1.1 The Power-Hungry Nature of GPUs

Modern AI models like GPT-4 are gargantuan in scale, reportedly on the order of 1.76 trillion parameters, and training them demands not just advanced math but immense energy. For instance, a single training run for such a model is estimated to consume over 41,000 megawatt-hours—enough to power thousands of homes for a year.

Much of this power goes into GPUs (Graphics Processing Units), especially NVIDIA’s state-of-the-art H100 and Grace Blackwell chips. Though exceptionally fast at matrix multiplication (the heart of AI computation), GPUs are strikingly power-hungry by biological standards. A single H100 chip can draw up to 700 watts of power—about 35 times more than the human brain, which runs on just 20 watts.

When scaled up to data centers containing tens or hundreds of thousands of GPUs, the energy footprint becomes astronomical—both economically and environmentally.

1.2 Memory Bottlenecks and Latency

Another major hurdle is memory bandwidth. While GPUs process data quickly, they often stall waiting for data from the system’s main memory, managed by the CPU. This back-and-forth communication introduces latency, especially in AI models with trillions of parameters.

The human brain, by contrast, integrates storage and processing within its neural network. Memory isn’t fetched from an external unit—it’s part of the network itself. This unified architecture is both faster and more efficient.

1.3 Poor Handling of Sparse Data

Large AI models work with sparse data—datasets filled with irrelevant or zero values. For example, when generating a sentence, the model uses only a few relevant words from its vast vocabulary. Yet GPUs still process all potential values, including the zeros, wasting energy and time.
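To see how much work the zeros waste, consider a toy sketch in Python (the sizes, indices, and values below are invented purely for illustration): a dense dot product touches all 50,000 entries, while an event-driven version touches only the three that actually carry information.

```python
import numpy as np

# Illustrative only: an activation vector in which almost every entry is zero.
activations = np.zeros(50_000)
activations[[7, 1_203, 48_999]] = [0.4, 1.7, 0.9]   # only three values matter
weights = np.random.rand(50_000)

# Dense approach (GPU-style): multiply every element, zeros included.
dense_result = np.dot(weights, activations)

# Event-driven approach: touch only the nonzero entries.
nz = np.nonzero(activations)[0]
sparse_result = np.dot(weights[nz], activations[nz])

print(np.isclose(dense_result, sparse_result))  # True -- same answer, ~3 multiplies instead of 50,000
```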

Our brains, in contrast, excel at filtering out irrelevant inputs. Whether recognizing faces in a crowd or focusing on a conversation in a noisy room, we process only what’s important.

2. What Makes the Brain So Efficient?

The human brain processes data through a vast network of neurons connected by synapses. But unlike artificial neural networks, real neurons don’t fire continuously. Instead, they accumulate electrical charge until a threshold is reached, at which point they fire an action potential—transmitting the information as a spike to the next neuron.

This event-based processing is energy-efficient. Most neurons remain idle until needed, unlike artificial networks that activate every node for every computation.
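A minimal sketch of this threshold-and-fire behavior is the leaky integrate-and-fire model, the building block most spiking networks start from. The Python below is illustrative only; the threshold, leak factor, and input values are arbitrary assumptions, not parameters of any particular chip:

```python
import numpy as np

def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps."""
    v = v_reset
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # membrane potential leaks, then integrates input
        if v >= threshold:          # threshold crossed: fire an action potential
            spikes.append(1)
            v = v_reset             # reset after the spike
        else:
            spikes.append(0)        # otherwise stay silent -- no spike, no work downstream
    return spikes

# Mostly-zero input: the neuron only spikes for the few meaningful events.
current = np.array([0.0, 0.0, 0.6, 0.7, 0.0, 0.0, 0.0, 1.2, 0.0, 0.0])
print(lif_neuron(current))   # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```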

This is where spiking neural networks (SNNs) come into play—a newer type of AI architecture designed to mimic how the brain processes information. SNNs work best when paired with hardware that supports this model natively.

Enter neuromorphic chips.

3. Neuromorphic Chips: Mimicking the Brain at the Hardware Level

Neuromorphic chips represent a radical shift in computing. Instead of separating processing (CPU) and memory (RAM), these chips integrate both into a single structure—just like the brain.

Each “artificial neuron” in a neuromorphic chip can store and process data simultaneously. These neurons are connected by electronic pathways analogous to biological synapses. The strength of these connections can change over time, enabling learning and memory—again, just like a biological brain.

Neuromorphic chips are typically designed to work with spiking neural networks, encoding information as bursts of activity or “spikes.” Most nodes remain dormant until stimulated, conserving power and enabling real-time responsiveness.

4. The Materials Powering the Neuromorphic Revolution

Designing chips that emulate the brain requires novel materials—far beyond traditional silicon.

4.1 Transition Metal Dichalcogenides (TMDs)

These are ultra-thin materials, just a few atoms thick, used to create low-power transistors. Their structure allows for efficient switching with minimal energy use, making them ideal for neuromorphic components.

4.2 Quantum and Correlated Materials

Materials like vanadium dioxide or hafnium oxide can switch between insulating and conducting states. This mimics neuron firing behavior—perfect for spiking neural networks.

4.3 Memristors

Short for “memory resistors,” memristors combine processing and memory in one device. They “remember” their resistance state even when powered off, making them ideal for energy-efficient learning and storage. Think of them as smart switches that can be trained to remember pathways—just like synapses in the brain.
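The appeal becomes clearer with a small sketch of an analog crossbar performing a vector–matrix multiply in place. Each stored conductance stands in for a trained synaptic weight; the array size and values are invented for illustration, and real devices add noise, nonlinearity, and read circuitry that this ignores:

```python
import numpy as np

# Hypothetical 3x4 memristor crossbar: each conductance plays the role of a
# stored synaptic weight (arbitrary units, invented for the sketch).
conductance = np.array([
    [0.2, 0.8, 0.1, 0.5],
    [0.9, 0.3, 0.7, 0.2],
    [0.4, 0.6, 0.5, 0.9],
])

input_voltages = np.array([1.0, 0.5, 0.2])  # applied along the rows

# Ohm's law plus Kirchhoff's current law: the current collected on each column
# is the dot product of voltages and conductances. The multiply-accumulate
# happens where the weights are stored, with no data shuttled to a separate ALU.
column_currents = input_voltages @ conductance
print(column_currents)
```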

5. Real-World Neuromorphic Chips in Action

While neuromorphic computing is still in its early stages, several pioneering chips are already showing promise:

5.1 IBM’s TrueNorth

One of the earliest and most well-known neuromorphic chips, TrueNorth comprises 4,096 neurosynaptic cores in a 64×64 grid. Each core includes 256 artificial neurons and 65,000+ connections. The chip uses spiking signals for communication, operates asynchronously (like the brain), and integrates memory with processing—enabling incredible energy efficiency.

5.2 Intel’s Loihi

Loihi includes 128 neural cores and supports event-driven computation. It can run spiking neural networks natively and is scalable by linking multiple chips together. Loihi is particularly optimized for real-time AI applications, such as robotics and edge computing.

5.3 SpiNNaker (Spiking Neural Network Architecture)

Developed in the UK, SpiNNaker takes a modular approach with multiple processors per chip and high-speed routers for communication. Boards can include dozens of chips, with large configurations surpassing 1 million processors. Its strength lies in real-time parallelism, ideal for simulating biological brains and running large SNNs efficiently.

5.4 BrainChip’s Akida

Akida is designed for low-power, real-time applications like IoT devices and edge AI. It can operate offline, adapt to new data without external training, and scale through a mesh network of interconnected nodes.

6. Other Emerging Players and Technologies

Several companies and research institutions are racing to develop neuromorphic hardware:

  • Prophesee builds event-based cameras that mimic human vision, ideal for robotics and drones.
  • SynSense focuses on ultra-low-power neuromorphic processors for wearables and smart homes.
  • Innatera is working on sensor-level processing for smart devices.
  • Rain AI, backed by OpenAI CEO Sam Altman, is developing chips that integrate memory and compute for massive power savings.
  • CogniFiber takes a radical leap by using light (photons) instead of electricity, creating fully optical neuromorphic processors for unprecedented speed and efficiency.

7. The Road Ahead: Challenges and Opportunities

Despite the promise, neuromorphic computing is still a work in progress. Key challenges include:

  • Lack of standardization in architecture and materials
  • Integration with existing software ecosystems
  • Limited commercial deployment so far

However, the long-term potential is enormous. Neuromorphic chips could:

  • Cut energy consumption by orders of magnitude
  • Enable real-time AI on edge devices like smartphones and drones
  • Unlock biologically plausible AI that learns like the brain
  • Overcome the physical limitations of current transistor-based chips

As Moore’s Law nears its end, neuromorphic chips represent a compelling alternative for continued progress in AI.

Conclusion: The Brain-Inspired Future Is Here

The current path of AI development—scaling up models with brute computational force—is rapidly reaching its limits. GPUs, despite their utility, are fundamentally inefficient for the way AI should operate. The human brain has evolved over millions of years to be the most efficient, adaptable, and intelligent computing system we know.

Neuromorphic chips aim to replicate that success in silicon. By combining memory and computation, leveraging spiking signals, and mimicking synaptic learning, these chips offer a transformative path forward—one where intelligence is built not just in code, but into the very architecture of our machines.

As research accelerates and new materials are discovered, neuromorphic computing could very well be the foundation upon which the next generation of AI is built.

Dojo supercomputer explained: How Tesla plans to beat Nvidia at AI training
https://roboticsbiz.com/dojo-supercomputer-explained-how-tesla-plans-to-beat-nvidia-at-ai-training/ (Thu, 01 May 2025)

In the ever-intensifying global race for artificial intelligence supremacy, Tesla has made a bold and strategic move that could redefine its future—not just as an electric vehicle manufacturer, but as a frontrunner in AI infrastructure. At the heart of this transformation lies Dojo, Tesla’s custom-built supercomputer designed to accelerate AI training, enhance Full Self-Driving (FSD) capabilities, and power the next wave of intelligent machines like the Optimus humanoid robot.

Unlike most AI companies that depend on third-party GPU suppliers, Tesla is carving its own path with Dojo, a gamble that CEO Elon Musk admits is high-risk, but potentially game-changing. While Nvidia’s dominance in the GPU market continues to soar, Tesla’s push to build its own AI supercomputing backbone underscores a broader ambition: technological independence, economic scalability, and global AI leadership.

This article dives deep into the origins, purpose, architecture, and future implications of the Dojo supercomputer—and why this might be Tesla’s most pivotal innovation yet.

The Origins of Dojo: From Concept to Crucial Infrastructure

When Elon Musk first teased the concept of a Tesla-built supercomputer at the company’s inaugural Autonomy Day in 2019, the idea seemed audacious. At the time, purpose-built AI supercomputers were still rare, and supercomputing in general was typically reserved for niche scientific applications like climate modeling or genome sequencing.

However, in just five years, the technological landscape has shifted dramatically. The explosion of generative AI and machine learning has made powerful GPU clusters indispensable. As the world clamors for more AI compute, Tesla’s early bet on Dojo now appears prescient.

Why Tesla Needs Dojo: Scarcity, Cost, and Strategic Control

Tesla has long relied on Nvidia’s GPU hardware for training its neural networks, investing billions in the company’s H100 and H200 chips. In fact, Tesla’s recent Cortex data center at Giga Texas houses approximately 50,000 Nvidia H100 GPUs. Simultaneously, Elon Musk’s xAI startup is constructing an even larger AI training facility in Memphis, Tennessee, which is expected to house up to 100,000 Nvidia chips—underscoring just how essential these processors are to Musk’s AI ambitions.

But this reliance poses a significant vulnerability. As Musk has noted, the demand for Nvidia chips is so intense that even deep-pocketed firms like Tesla face supply bottlenecks. Musk’s response is pragmatic: the best supplier is no supplier. By building Dojo in-house, Tesla aims to ensure uninterrupted access to AI compute while simultaneously reducing long-term dependency and costs.

Economically, the logic is compelling. While the initial investment for Dojo—reportedly around $500 million for version 1—is steep, it could significantly reduce the cost per unit of AI training over time. Musk has floated the idea that Dojo could eventually become a revenue-generating platform akin to Amazon Web Services (AWS), offering AI compute as a service to other companies.

What Is Dojo? A Purpose-Built AI Training System

So, what exactly is Dojo? Unlike generic supercomputers built from off-the-shelf parts, Dojo is purpose-engineered by Tesla for AI model training—specifically the computer vision systems that power Tesla’s FSD and robotics programs.

The heart of Dojo is the D1 chip, a custom Tesla-designed processor built at TSMC in Taiwan. Unlike Nvidia GPUs, which are general-purpose and highly flexible, the D1 chip and its associated architecture are specialized for one thing: high-throughput AI training. This specificity means Tesla can potentially achieve far greater efficiency for its own workloads than it could with Nvidia’s more universal chips.

Dojo is currently housed in a data center in Buffalo, New York—a location chosen perhaps for its cold climate (ideal for cooling), stable infrastructure, and access to green energy via hydroelectric dams. While version 1 of Dojo is modest in raw scale—estimated to deliver performance equivalent to 8,000 Nvidia H100 GPUs—it is highly optimized for Tesla’s unique needs.

The Dual Path Strategy: Nvidia + Dojo

Tesla isn’t putting all its eggs in one basket. The company is pursuing a dual-path AI infrastructure strategy: continue to use Nvidia GPUs where it makes sense while building out Dojo as a parallel and eventually dominant system. This balanced approach allows Tesla to remain competitive in the short term while investing in a more scalable and self-reliant future.

Musk has also confirmed that Dojo V1 is already online and performing productive AI training tasks. The roadmap includes plans for future versions—Dojo 1.5, Dojo 2, Dojo 3—which will presumably increase in scale, flexibility, and application range.

Despite the excitement, Musk has been transparent about the risks. Dojo is a long-shot bet, but one with a high potential reward. If successful, it could allow Tesla not only to train AI models faster and cheaper but also to evolve into a full-fledged AI platform company.

Applications: From Full Self-Driving to Humanoid Robots

The first and most obvious application for Dojo is Tesla’s Full Self-Driving system. Achieving true autonomy requires processing and labeling immense volumes of video data—something that Dojo is custom-built to accelerate.

With the robotaxi vehicle unveiled in October 2024 and a dedicated robotaxi service on the roadmap, the pressure is on. This vehicle will be judged entirely on its software capabilities, meaning the AI must be flawless. Dojo’s ability to rapidly iterate and refine these models will be crucial.

But FSD is just the beginning. Tesla also plans to use Dojo to train AI for its humanoid robot, Optimus. Right now, the training is limited to vision and navigation tasks, much like FSD. But Musk envisions a future where Optimus robots are as ubiquitous as smartphones, helping with household chores, elderly care, and even industrial labor. To support this, Dojo will eventually need to train more generalized AI models, moving beyond vision to include language, reasoning, and manipulation tasks.

This expansion would require a new generation of Dojo hardware—a transition Musk hinted at when he said Dojo V2 will “address current limitations.”

Scaling Global AI: Dojo and the Road Ahead

One of the most compelling justifications for Dojo is the need to scale AI across geographies. While Tesla has amassed an enormous dataset from U.S. and Chinese roads, global deployment of FSD will require equally comprehensive data from every corner of the world—Brazil, India, Europe, Southeast Asia.

Every new environment introduces new road signs, languages, driving behaviors, and infrastructure quirks. Training AI for these diverse conditions will require exponentially more data—and compute power. Dojo can make that training not only feasible but cost-effective.

Tesla’s long-term vision appears to be the creation of a vertically integrated AI empire: data collection from its vehicle fleet, AI training on Dojo, inference on Tesla-designed chips inside vehicles, and real-world deployment in both cars and robots. It’s a loop that no other automaker—or tech company—currently controls end to end.

Economic Outlook: Cost vs. Capability

From a financial standpoint, Dojo’s costs are significant but strategic. Elon Musk has revealed that Tesla’s AI expenditures in 2024 will reach around $10 billion. Roughly half of that will be internal R&D, including the vehicle inference chips and Dojo supercomputing clusters. Nvidia hardware will still account for $3–4 billion—about two-thirds of the hardware spend.

Despite being a smaller slice of the pie, Dojo offers something Nvidia can’t: long-term cost savings and customizability. If Tesla can match or surpass Nvidia’s performance with a system optimized for its own tasks, the economics could tilt decisively in Dojo’s favor.

Moreover, by owning the hardware and the software stack, Tesla opens the door to entirely new business models—whether it’s selling Dojo-as-a-Service to other AI firms, licensing the architecture, or offering hosted training for third-party autonomous systems.

Final Thoughts: A Calculated Gamble That May Just Pay Off

Tesla’s Dojo project exemplifies the kind of calculated risk that defines innovation. It’s a multimillion-dollar moonshot aimed at securing Tesla’s leadership in both automotive and robotics AI. The stakes are high, the road is uncertain, and the returns are anything but guaranteed.

Yet, in an industry where access to compute is fast becoming the most valuable resource, Tesla’s decision to build rather than buy could set it apart. Dojo isn’t just a supercomputer—it’s a strategic fulcrum around which Tesla is balancing its future.

If Musk and his team succeed, Dojo could become the cornerstone of not only Tesla’s next-gen vehicles and humanoid robots but the global AI economy itself.

How graphics cards work—and why they matter for the future of games and AI
https://roboticsbiz.com/how-graphics-cards-work-and-why-they-matter-for-the-future-of-games-and-ai/ (Fri, 11 Apr 2025)

In a world where video games simulate real-world physics with astonishing accuracy, where artificial intelligence is transforming industries, and where data moves faster than ever, one unsung hero works quietly in the background: the graphics card. Known technically as the GPU (Graphics Processing Unit), this silicon marvel isn’t just for gamers anymore—it’s a central force in high-performance computing, deep learning, and cryptocurrency mining.

But what exactly is inside a graphics card? What gives it the jaw-dropping ability to perform trillions of calculations per second? How is it different from the CPU? And why is it so well-suited for tasks beyond gaming—like training neural networks and processing massive datasets?

In this article, we crack open the mystery of how graphics cards really work—from their architectural design and computational capabilities to the math they perform and their crucial role in modern technology.

The Mathematics of Modern Gaming

It’s easy to underestimate the processing power required to run today’s most realistic video games. While older games like Mario 64 needed around 100 million calculations per second, modern titles such as Cyberpunk 2077 demand nearly 36 trillion calculations per second. That’s the equivalent of every person on 4,400 Earths each doing one long multiplication problem every second.

It’s not just impressive—it’s mind-bending.
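The comparison is easy to sanity-check with back-of-the-envelope arithmetic (the world-population figure is an assumption):

```python
calculations_per_second = 36e12   # ~36 trillion operations per second (Cyberpunk 2077 figure above)
people_per_earth = 8.1e9          # rough world population, assumed for the comparison

earths_needed = calculations_per_second / people_per_earth
print(f"{earths_needed:,.0f} Earths")   # ~4,444 Earths of people each doing one calculation per second
```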

This colossal task is handled by GPUs, which are designed to process massive amounts of simple calculations in parallel. But how do they do it? To understand that, let’s begin with a comparison that often confuses even tech-savvy users: CPUs versus GPUs.

CPU vs GPU: Different Brains for Different Jobs

Think of the CPU as a jumbo jet—fast, nimble, and capable of handling a variety of tasks. It has fewer cores (typically around 24), but each one is highly optimized to perform complex tasks quickly and flexibly.

On the other hand, the GPU is more like a cargo ship—it might be slower in terms of clock speed, but it can carry an enormous load. A high-end GPU can contain over 10,000 cores, each built to handle simple operations en masse.

The key distinction lies in flexibility versus volume. CPUs can run operating systems, manage input/output, and handle diverse software, but they’re not optimized for handling huge volumes of repetitive calculations. GPUs, however, excel at performing a single operation across millions of data points simultaneously. That’s why they dominate in areas like 3D rendering, machine learning, and mining cryptocurrencies.

Anatomy of a Modern GPU: Inside the GA102

Let’s open up a modern high-end GPU chip like NVIDIA’s GA102, which powers the RTX 3080 and 3090 series. With 28.3 billion transistors, the chip is a highly structured hierarchy of processing clusters, all working in unison.

  • 7 Graphics Processing Clusters (GPCs)
  • Each GPC contains 12 Streaming Multiprocessors (SMs)
  • Each SM includes:
    • 4 processing partitions (each issuing one 32-thread warp at a time)
    • 1 ray tracing core
    • 32 CUDA cores per partition (10,752 CUDA cores across the full chip)
    • 1 Tensor core per partition (336 Tensor cores in total)

Each of these cores has a specific job:

  • CUDA cores are the general workers, performing simple arithmetic operations crucial for video rendering.
  • Tensor cores are designed for deep learning, performing matrix math required by neural networks.
  • Ray tracing cores simulate the way light interacts with surfaces—essential for hyper-realistic rendering.

Despite their different release dates and price tags, the RTX 3080, 3080 Ti, 3090, and 3090 Ti all use this same GA102 design. The difference? Bin-sorting. During manufacturing, chips with slight defects have specific cores disabled and are repurposed for lower-tier models. This efficient reuse strategy is a clever workaround for manufacturing imperfections.

A Closer Look at a CUDA Core

A single CUDA core might seem small, but it’s a master of efficiency. Comprising about 410,000 transistors, it performs fundamental operations like fused multiply-add (FMA)—calculating A × B + C in a single step using 32-bit numbers.

Only a handful of special function units are available to handle more complex operations like division, square roots, or trigonometric calculations, making CUDA cores ultra-efficient for their intended tasks. Multiplied across thousands of cores and driven by clock speeds of up to 1.7 GHz, GPUs like the RTX 3090 deliver an astounding 35.6 trillion calculations per second.
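That headline figure can be reproduced with simple arithmetic. The sketch below uses the commonly quoted RTX 3090 numbers (10,496 of the GA102’s 10,752 CUDA cores are enabled on that card) and counts a fused multiply-add as two operations, which is the usual convention:

```python
cuda_cores = 10_496          # CUDA cores enabled on the RTX 3090
boost_clock_hz = 1.70e9      # ~1.7 GHz clock
ops_per_core_per_cycle = 2   # one FMA = 1 multiply + 1 add

peak_ops = cuda_cores * boost_clock_hz * ops_per_core_per_cycle
print(f"{peak_ops / 1e12:.1f} trillion calculations per second")   # ~35.7, matching the ~35.6 figure above
```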

The Unsung Hero: Graphics Memory

To keep the GPU’s army of cores fed with data, it relies on a high-speed companion: graphics memory. Modern GPUs, like those using Micron’s GDDR6X memory, can transfer up to 1.15 terabytes of data per second. That’s more than 15 times faster than standard system memory (DRAM), which tops out around 64 GB/s.

How is this possible?

It comes down to memory architecture. GDDR6X and the upcoming GDDR7 use advanced encoding techniques (PAM-4 and PAM-3 respectively) to send more data using multiple voltage levels, not just binary 1s and 0s. This allows them to transmit more bits in fewer cycles, achieving high throughput with greater efficiency.
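A toy sketch shows why multi-level signalling carries more data per clock: PAM-4 packs two bits into each transmitted symbol instead of one. The voltage levels and the bit-to-level mapping below are placeholders, not the actual GDDR6X electrical specification:

```python
# Map each pair of bits to one of four signal levels (placeholder values).
pam4_levels = {
    (0, 0): 0.0,
    (0, 1): 1 / 3,
    (1, 0): 2 / 3,
    (1, 1): 1.0,
}

bits = [1, 0, 1, 1, 0, 0, 0, 1]
pairs = list(zip(bits[0::2], bits[1::2]))      # group the bit stream two at a time
symbols = [pam4_levels[p] for p in pairs]      # 8 bits -> 4 symbols

print(symbols)   # four multi-level symbols sent instead of eight binary pulses
```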

And for ultra-high-performance applications like AI data centers, Micron’s HBM3E (High Bandwidth Memory) takes things even further—stacking memory dies vertically and connecting them with Through-Silicon Vias (TSVs) to form high-density cubes; several such stacks placed around a single accelerator can supply up to 192 GB of memory at significantly reduced power consumption.

How GPUs Handle Massive Workloads: The Power of Parallelism

What makes a GPU uniquely suited to tasks like rendering a complex 3D scene or running a neural network is its ability to solve “embarrassingly parallel” problems. These are tasks that can be broken down into thousands or even millions of identical operations that don’t depend on one another.

GPUs implement SIMD (Single Instruction, Multiple Data) or its more flexible cousin SIMT (Single Instruction, Multiple Threads) to perform the same operation across vast datasets simultaneously.

Take rendering a cowboy hat in a 3D scene. The hat consists of 28,000 triangles formed by 14,000 vertices. To place it in a world scene, each vertex must be transformed from model space to world space. This is achieved using the same mathematical operation applied across every single vertex—perfect for SIMD-style execution.

Multiply that by every object in a modern video game scene (sometimes over 5,000 objects with 8 million vertices) and you’ll see why parallel processing is essential.
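The per-vertex math itself is just the same matrix multiply repeated over independent data, which is exactly what makes it so GPU-friendly. Here is an illustrative NumPy sketch; the vertex count matches the hat example, but the vertices and the model-to-world matrix are made up:

```python
import numpy as np

# Illustrative model-space vertices in homogeneous coordinates, standing in
# for the ~14,000 vertices of the cowboy hat mentioned above.
vertices = np.random.rand(14_000, 4)
vertices[:, 3] = 1.0                      # w = 1 for positions

# One model-to-world transform: a uniform scale plus a translation (made up).
model_to_world = np.array([
    [2.0, 0.0, 0.0,  5.0],
    [0.0, 2.0, 0.0,  0.0],
    [0.0, 0.0, 2.0, -3.0],
    [0.0, 0.0, 0.0,  1.0],
])

# The same operation is applied to every vertex independently -- the kind of
# "embarrassingly parallel" work a GPU spreads across thousands of cores.
world_vertices = vertices @ model_to_world.T
print(world_vertices.shape)               # (14000, 4)
```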

Mapping Threads to Hardware: Warps, Blocks, and Grids

In GPU computing, threads (independent streams of instructions, each working on its own piece of data) are grouped into warps of 32. Warps are grouped into thread blocks, which are managed by streaming multiprocessors. All of these are coordinated by a control unit called the GigaThread Engine.

Originally, GPUs used SIMD where all threads in a warp executed in strict lockstep. However, modern architectures employ SIMT, giving each thread its own program counter, enabling them to diverge and reconverge independently based on conditions—a huge step forward in flexibility and performance.
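A CPU-side sketch makes the mapping concrete: each thread derives a global index from its block and thread IDs and handles exactly one element, mirroring the familiar CUDA convention blockIdx.x * blockDim.x + threadIdx.x. The loop below only simulates the idea sequentially; on a GPU every iteration would run as its own thread:

```python
# Sequential simulation of GPU-style thread indexing (illustrative sizes).
threads_per_block = 32          # one warp per block, for simplicity
data = list(range(100))
out = [0] * len(data)

num_blocks = (len(data) + threads_per_block - 1) // threads_per_block
for block_id in range(num_blocks):              # blocks get scheduled onto SMs
    for thread_id in range(threads_per_block):  # threads within each block
        i = block_id * threads_per_block + thread_id
        if i < len(data):                       # guard against the ragged tail
            out[i] = data[i] * 2                # same instruction, different data

print(out[:8])   # [0, 2, 4, 6, 8, 10, 12, 14]
```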

Beyond Gaming: Bitcoin Mining and Neural Networks

One of the early surprises in GPU evolution was their unexpected effectiveness at bitcoin mining. Mining involves finding a cryptographic hash that meets a strict requirement—basically a number with the first 80 bits as zero. GPUs could run millions of variations of the SHA-256 algorithm every second, giving them an edge in early crypto markets.
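The structure of that search is easy to sketch with Python’s standard hashlib. This toy version uses a tiny difficulty target so it finishes quickly, and it hashes an arbitrary placeholder header rather than the real 80-byte Bitcoin block format:

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = bin(int.from_bytes(digest, "big"))[2:].zfill(len(digest) * 8)
    return len(bits) - len(bits.lstrip("0"))

header = b"example block header"   # placeholder data, not a real block header
target_bits = 16                   # toy difficulty; the real network needs dozens of zero bits
nonce = 0
while True:
    payload = header + nonce.to_bytes(8, "big")
    digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()  # double SHA-256, as in Bitcoin
    if leading_zero_bits(digest) >= target_bits:
        break
    nonce += 1

print(f"found nonce {nonce} after {nonce + 1} hashes")
```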

However, this edge has faded with the rise of ASICs (Application-Specific Integrated Circuits), which are tailor-made for mining and can outperform GPUs by a factor of 2,600.

Where GPUs still shine is in neural network training, thanks to tensor cores. These perform matrix multiplication and addition at blazing speeds—a key requirement for training large language models and deep learning systems. A single tensor core can calculate the product of two matrices, add a third, and output the result—all in parallel.
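Functionally, that operation is a fused matrix multiply-accumulate, D = A × B + C. Here is a NumPy sketch of the arithmetic; the tile size and data types are chosen only for illustration, whereas real tensor cores operate on hardware-defined tile shapes and process many tiles concurrently:

```python
import numpy as np

A = np.random.rand(4, 4).astype(np.float16)   # low-precision input tiles
B = np.random.rand(4, 4).astype(np.float16)
C = np.random.rand(4, 4).astype(np.float32)   # higher-precision accumulator

# Multiply the two input tiles and accumulate the third in one fused step.
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.shape)   # (4, 4)
```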

Conclusion: The Beating Heart of Modern Computing

Whether it’s powering ultra-realistic game environments, training AI systems, or accelerating scientific simulations, the GPU is a technological marvel. It turns mathematical brute force into seamless virtual worlds, processes that would take human lifetimes into real-time insights, and plays a central role in shaping the digital future.

So the next time you load a game, run a machine learning model, or even just watch a high-resolution video, spare a moment to appreciate the intricate engineering beneath the surface—an orchestration of transistors, memory, and parallel threads working in harmony. That’s the power of a graphics card.

How NVIDIA’s latest AI chips are revolutionizing next-gen robotics
https://roboticsbiz.com/how-nvidias-latest-ai-chips-are-revolutionizing-next-gen-robotics/ (Mon, 07 Apr 2025)

In the rapidly advancing world of robotics, intelligence is no longer confined to decision-making algorithms or mechanical dexterity. The new age of robots is defined by their ability to perceive, learn, and act autonomously—driven not just by software, but by the sophisticated AI chips embedded within them. At the heart of this transformation stands NVIDIA, the undisputed titan in GPU technology and AI infrastructure.

With its latest generation of AI chips, including the Jetson Orin and Thor, NVIDIA is doing more than just powering devices—it is laying the computational foundation for a new era of robotic intelligence. From autonomous vehicles to humanoid robots, these chips are enabling machines to understand the world like never before. This article explores how NVIDIA’s AI chips are transforming robotics, the design principles behind these silicon marvels, and the future they are helping shape.

The Rise of Robotic Perception and Action

For decades, robots were synonymous with rigid automation—repetitive machines bolted to factory floors, executing pre-programmed tasks with little awareness of their surroundings. That era, however, is fading fast. The next generation of robots is mobile, perceptive, and interactive, with capabilities that mimic human cognition and sensory perception.

Central to this shift is the convergence of visual processing, natural language understanding, and dynamic decision-making—all of which demand vast computational resources. Traditional CPUs fall short in meeting these demands, but NVIDIA’s AI chips, designed specifically for parallel processing, excel in accelerating these workloads.

Robots today are expected to not only process massive visual inputs from cameras and LIDAR but also interpret complex environments, predict human behavior, and even communicate fluently in natural language. These are not just software feats—they are made possible by the raw horsepower and architectural brilliance of chips like NVIDIA’s Orin and Thor.

Jetson Orin: Powering Robots with a Supercomputer in the Palm of Your Hand

Jetson Orin represents a watershed moment for robotic computing. Touted as delivering up to 275 trillion operations per second (TOPS), Orin provides server-class performance in an ultra-compact form factor. This means even small robots can now process multiple AI models simultaneously in real time.

Orin’s versatility has made it a go-to platform across diverse domains—from logistics bots in warehouses to robotic arms in manufacturing plants, and even AI-powered agriculture. Its ability to run complex neural networks for computer vision, SLAM (simultaneous localization and mapping), and object detection makes it indispensable for autonomous navigation and real-time perception.

Perhaps one of the most significant breakthroughs is the ability to fuse sensory data. A robot equipped with Orin can simultaneously process video streams, inertial data, audio inputs, and LIDAR signals to construct a cohesive understanding of its environment. This enables both precise localization and robust decision-making.

Project GR00T and the Dream of General-Purpose Robots

While task-specific robots are already proliferating, the holy grail remains a general-purpose robot—capable of learning, adapting, and performing a wide range of tasks in unpredictable environments. Enter Project GR00T, NVIDIA’s ambitious initiative aimed at developing the AI foundation model for humanoid robots.

Modeled loosely on how large language models (LLMs) like ChatGPT operate, GR00T is designed to enable robots to learn from a broad range of sensor inputs and interactions. Just as LLMs generalize from text, GR00T aims to generalize from visual, tactile, and motor data, allowing robots to adapt to novel situations with minimal reprogramming.

This marks a significant departure from traditional robotics, where behaviors are often handcrafted or trained for narrow tasks. With GR00T and the computational muscle of NVIDIA’s chips, robots will be able to watch humans perform tasks, understand the underlying intentions, and mimic or even improve upon them.

Thor: The Superchip for Autonomous Machines

NVIDIA Thor represents the next leap forward, particularly for more demanding autonomous systems like self-driving cars and humanoid robots. Packing a jaw-dropping 2,000 TOPS of AI performance, Thor unifies multiple computing domains—autonomous driving, cockpit computing, and infotainment—into a single, high-efficiency chip.

This unification has profound implications for both power efficiency and latency reduction. For autonomous machines, the ability to make split-second decisions based on fused sensor inputs is crucial. Thor enables exactly that—integrating vision, LIDAR, radar, and ultrasonic data into one cohesive stream of intelligence.

Beyond performance, Thor also introduces a high degree of flexibility. It can partition compute resources for safety-critical functions and general AI workloads independently. This ensures that mission-critical operations remain deterministic, even while running complex neural networks.

In humanoid robots, Thor can enable the simultaneous execution of vision processing, balance control, natural language conversation, and task planning—all on the same board.

The Role of Simulation: Omniverse and Isaac Lab

Building intelligent robots isn’t just about hardware. Training these systems in the real world is slow, expensive, and often unsafe. NVIDIA addresses this challenge with its simulation platforms—Omniverse and Isaac Lab.

Omniverse provides a high-fidelity, physically accurate digital twin environment where robots can be trained, tested, and refined in virtual worlds. It replicates the physics, lighting, and materials of the real world so that policies learned in simulation can transfer directly to physical robots—what’s known as “sim2real” transfer.

Isaac Lab, NVIDIA’s reinforcement learning platform, accelerates the development of control policies using simulations. Combined with domain randomization techniques, Isaac Lab allows robots to experience thousands of hours of training data in minutes, making them more resilient to real-world variation.

This simulation stack not only saves time and money but democratizes robotics research by making it accessible without requiring fleets of physical robots.

Generative AI Meets Robotics: A New Frontier

One of the most exciting intersections is that of generative AI and robotics. Imagine a robot that can generate its own solutions to novel tasks, reason through instructions given in natural language, or learn from watching YouTube videos. This is not science fiction—it’s the next logical step in merging the power of LLMs and generative models with physical embodiment.

NVIDIA envisions a world where foundation models like GR00T serve as the cognitive engine for robots. These models would draw on vast datasets—images, videos, human demonstrations, text—and use that collective intelligence to execute tasks in the real world.

Generative AI also allows for the creation of synthetic training data, speeding up the development of robust models. Moreover, robots powered by LLMs can engage in richer, more human-like conversations, improving human-robot interaction in homes, hospitals, and beyond.

The Bigger Picture: A Robotics-Centric AI Ecosystem

What NVIDIA is building isn’t just faster chips—it’s a vertically integrated AI ecosystem tailored for robotics. From the silicon (Orin, Thor) to the simulation platforms (Omniverse, Isaac), to the AI models (GR00T), and even the developer tools (Isaac SDK), everything is designed to work cohesively.

This approach mirrors NVIDIA’s success in other domains, such as autonomous vehicles and high-performance computing. It’s not enough to have the fastest hardware—the surrounding infrastructure, tooling, and ecosystem must empower developers, researchers, and enterprises to build and deploy robots at scale.

Through this, NVIDIA is democratizing robotics, lowering the barrier to entry, and accelerating innovation across industries—from agriculture to healthcare to logistics.

Conclusion: Robots With a Brain, and a Purpose

The robot revolution is no longer a distant dream—it’s unfolding right now. And at the core of this revolution is a simple truth: intelligent behavior requires intelligent hardware.

NVIDIA’s latest AI chips—Orin and Thor—are not just processors; they are enablers of perception, cognition, and autonomy. When combined with foundation models like GR00T and the power of simulation, these chips are turning science fiction into engineering reality.

Whether it’s a warehouse robot navigating shelves, a humanoid learning from human demonstration, or an autonomous car interpreting a complex highway scenario, one thing is clear: the brains behind these machines are increasingly being built by NVIDIA.

As robots become more capable and ubiquitous, the companies that power their intelligence will shape the future of human-robot collaboration—and NVIDIA is well on its way to leading that charge.

From gaming to AI dominance: How Nvidia redefined the tech industry
https://roboticsbiz.com/from-gaming-to-ai-dominance-how-nvidia-redefined-the-tech-industry/ (Wed, 02 Apr 2025)

In the world of technology, few companies have experienced a transformation as remarkable as Nvidia. Once known primarily for its high-performance graphics cards tailored to gaming enthusiasts, Nvidia has evolved into a dominant force in artificial intelligence (AI), data centers, and autonomous systems. This journey from a niche gaming hardware manufacturer to an AI giant underscores Nvidia’s ability to anticipate industry trends, innovate aggressively, and execute strategic pivots at the right moments.

How did Nvidia achieve this meteoric rise? What key decisions enabled it to transition from gaming to AI, and what lessons can other businesses draw from its success? This article explores Nvidia’s journey, breaking down the milestones, challenges, and strategic moves that reshaped the company’s destiny.

The Humble Beginnings: Gaming and Graphics

Founded in 1993 by Jensen Huang, Chris Malachowsky, and Curtis Priem, Nvidia initially focused on creating high-performance graphics processing units (GPUs) for gaming. The gaming industry was rapidly evolving, with increasing demand for realistic 3D graphics. Nvidia seized this opportunity by developing GPUs that enhanced gaming experiences, making visuals smoother and more lifelike.

One of its most significant breakthroughs came in 1999 with the launch of the GeForce 256, the world’s first GPU. This innovation revolutionized computer graphics, introducing hardware acceleration for 3D rendering and transforming gaming experiences. The GeForce brand quickly became synonymous with high-performance gaming, establishing Nvidia as a leader in the industry.

In the years that followed, Nvidia continued to refine its graphics technology, later introducing innovations such as real-time ray tracing and AI-driven graphics enhancements. These advancements not only solidified its dominance in gaming but also laid the foundation for broader applications of GPU technology.

The Shift Toward Parallel Computing

While gaming remained a stronghold, Nvidia recognized early on that its GPU technology had applications beyond just rendering graphics. Unlike traditional central processing units (CPUs), GPUs excelled at parallel processing—handling multiple calculations simultaneously. This made them ideal for computationally intensive tasks such as scientific simulations, deep learning, and data analysis.

In 2006, Nvidia launched CUDA (Compute Unified Device Architecture), a software platform that allowed developers to harness GPU power for general-purpose computing. CUDA opened the door for researchers and engineers to use Nvidia’s hardware for tasks beyond gaming, setting the stage for its expansion into AI and machine learning.

The introduction of CUDA marked a turning point for Nvidia. Academic institutions, research labs, and tech companies began leveraging GPUs for tasks like protein folding, climate modeling, and even cryptography. By investing in software alongside its hardware innovations, Nvidia positioned itself as a leader in high-performance computing (HPC).

The AI Boom: Nvidia’s Strategic Pivot

As artificial intelligence gained traction in the 2010s, Nvidia found itself in a unique position. Deep learning, the subset of AI responsible for breakthroughs in image recognition, natural language processing, and self-driving cars, relied heavily on massive computational power. GPUs, with their parallel processing capabilities, became the go-to hardware for training AI models.

Nvidia capitalized on this shift by investing heavily in AI research and developing specialized GPUs optimized for deep learning. The launch of the Tesla series GPUs, designed for high-performance computing and AI workloads, signaled Nvidia’s commitment to this new frontier. Companies like Google, Amazon, and Microsoft began adopting Nvidia’s hardware to power their AI-driven applications, further solidifying its dominance.

A key milestone was the release of the Volta architecture in 2017, which introduced Tensor Cores—specialized hardware units designed for deep learning calculations. This innovation drastically improved the speed and efficiency of AI model training, further embedding Nvidia’s GPUs in the AI ecosystem.

Data Centers: Expanding Beyond Consumer Markets

Another pivotal move in Nvidia’s expansion was its focus on data centers. As cloud computing and big data analytics surged, demand for high-performance computing hardware skyrocketed. Nvidia leveraged this trend by developing data center-grade GPUs and AI accelerators tailored for enterprise workloads.

The acquisition of Mellanox Technologies in 2020, a company specializing in high-performance networking, strengthened Nvidia’s position in the data center space. This strategic move allowed Nvidia to offer end-to-end solutions for AI infrastructure, making it a key player in cloud computing and enterprise AI adoption.

Today, Nvidia’s data center business generates billions in revenue, rivaling its gaming segment. The introduction of AI-driven services, such as Nvidia DGX systems and cloud-based AI solutions, further underscores its dominance in this sector.

AI and Autonomous Systems: The Road to Self-Driving Cars

Beyond data centers, Nvidia set its sights on autonomous vehicles. Recognizing the immense computational requirements of self-driving technology, Nvidia developed its Drive platform—an AI-powered system designed to process sensor data, make real-time driving decisions, and enhance vehicle safety.

Major automakers and tech companies, including Tesla, Mercedes-Benz, and Toyota, began integrating Nvidia’s technology into their autonomous driving initiatives. By positioning itself at the intersection of AI and automotive innovation, Nvidia expanded its influence beyond traditional computing markets.

In addition to self-driving technology, Nvidia has also ventured into robotics and edge AI, developing chips that power everything from AI-powered medical devices to industrial automation systems. These initiatives highlight Nvidia’s vision of a world driven by intelligent, autonomous machines.

Challenges and Competitors

Despite its success, Nvidia has faced challenges along the way. Competition from companies like AMD and Intel remains fierce, with rivals developing their own AI-focused hardware. Additionally, regulatory hurdles, such as the failed acquisition of chip designer ARM, have tested Nvidia’s ability to execute major business moves.

Another challenge has been supply chain disruptions, particularly during the global semiconductor shortage. Ensuring steady production of GPUs amidst increasing demand has required strategic partnerships and investments in manufacturing capabilities.

However, Nvidia continues to innovate. With advancements in AI chips, quantum computing, and next-generation GPU architectures, the company remains at the forefront of technological disruption.

The Future: What Lies Ahead for Nvidia?

As AI continues to evolve, Nvidia’s role in shaping its future cannot be understated. With breakthroughs in generative AI, robotics, and real-time computing, the company is well-positioned to remain a leader in the industry. Future areas of growth include AI-powered healthcare, robotics, and the metaverse—each presenting new opportunities for Nvidia to expand its influence.

Additionally, Nvidia’s focus on software ecosystems, including AI frameworks and cloud services, will be critical in maintaining its competitive edge. By fostering an ecosystem where developers can build and deploy AI solutions efficiently, Nvidia ensures continued relevance in a rapidly evolving landscape.

Conclusion

Nvidia’s journey from a gaming hardware manufacturer to an AI powerhouse is a testament to its vision, adaptability, and relentless innovation. By recognizing the potential of GPUs beyond gaming and strategically investing in AI, data centers, and autonomous systems, Nvidia has cemented its status as a technology giant.

For businesses looking to navigate technological shifts, Nvidia’s story offers valuable lessons: stay ahead of trends, embrace innovation, and be willing to pivot when opportunities arise. As Nvidia continues to push the boundaries of what’s possible in AI and computing, one thing is clear—its impact on the future of technology is only just beginning.

AI hardware – Three key challenges to overcome
https://roboticsbiz.com/ai-hardware-three-key-challenges-to-overcome/ (Tue, 26 Mar 2024)

AI hardware acts as the foundation upon which the impressive capabilities of Artificial Intelligence are built. Without specialized hardware like GPUs and TPUs, the complex calculations required for training and running AI models would be computationally impractical. Traditional CPUs, while powerful for general tasks, struggle with the parallel processing and specialized operations needed for efficient AI workloads.

This specialized hardware plays a critical role in overcoming limitations and accelerating advancements in AI. By enabling faster training times, lower power consumption during inference, and the ability to handle increasingly complex models, AI hardware is instrumental in unlocking the true potential of AI and its transformative impact on various fields.

Several critical challenges stand in the way of creating the ideal AI hardware solution. This article explores three key hurdles, aptly summarized as the “Three Ds”: Delivering inference at scale, Democratizing AI model development, and catering to Developers.

Challenge #1: Delivering Inference at Scale

The true value of an AI model lies in its ability to be used in real-world applications. Training an AI model is akin to development, while its actual use, known as inference, represents production. Inference can range from a few instances to millions of times per day depending on the application. Furthermore, the growing trend of interactive AI, exemplified by tools like Microsoft’s GitHub Copilot, further increases the need for frequent inference. This heavy reliance on inference exposes a critical issue: power consumption. Running complex AI models can be incredibly energy-intensive. Additionally, in production environments, inference speed and latency become crucial factors impacting overall application performance. Striking a balance between power efficiency, throughput, and latency is a key challenge for AI hardware.

Challenge #2: Democratizing AI Model Development

Similar to any innovation, widespread adoption of AI hinges on its adaptability to diverse user needs. The ability for a broader range of individuals to develop or customize AI models is crucial not only for fostering creativity but also for addressing potential regulatory concerns. Specialization, as advocated by Makimoto’s wave theory, is another key strategy for making AI development more manageable. The recent surge in open-source AI models underscores the importance of future hardware that efficiently supports model fine-tuning, allowing users to tailor existing models to specific applications.

Challenge #3: Empowering Developers

The success of any technology hinges on the efforts of developers who translate its potential into practical applications. The ultimate goal is not just to possess an AI model, but to leverage its capabilities through inference within useful applications. Without a vibrant developer ecosystem, AI remains an unfulfilled promise. History provides ample evidence of this principle. Platforms that failed to prioritize developer needs, such as proprietary Unix systems or the early Macintosh, ultimately struggled to gain traction. NVIDIA’s success in the AI domain is largely attributed to their unwavering commitment to developer tools and software. Any competitor in the AI hardware space must prioritize building a robust developer ecosystem to ensure long-term success.

In conclusion, overcoming the “Three Ds” – Delivering inference at scale, Democratizing AI development, and empowering Developers – is essential for the advancement of AI hardware. By addressing these challenges, we can pave the way for a future where AI fulfills its vast potential and revolutionizes various aspects of our lives.

System on a Chip (SoC) – Advantages and disadvantages explained
https://roboticsbiz.com/system-on-a-chip-soc-advantages-and-disadvantages-explained/ (Sat, 06 Feb 2021)

Over the past ten years, as integrated circuits have become increasingly complex and expensive, the semiconductor industry began to embrace new design and reuse methodologies to manage the complexity inherent in large chips and keep pace with the levels of integration available.

One such emerging trend is System-on-Chip (SoC) technology, making it possible to fabricate a whole system on a single chip. SoC comes with predesigned and pre-verified blocks, often called intellectual property (IP) blocks, IP cores, or virtual components, obtained from internal sources or third parties and combined on a single chip.

SoC allows semiconductor manufacturers to build smaller and simpler systems embedded in a single chip, resulting in a reduction of cost and increased efficiency of a particular system as opposed to its equivalent board-level system, built with standard parts and additional components.

For example, a computer fabricated on a single chip can include a microprocessor, memory, and the peripherals needed to run Windows or Linux. It can also incorporate powerful hardware accelerators for motion video processing and display control, along with peripherals such as a camera interface, a 24-bit TFT LCD controller, and power management.

Therefore, system on chip is becoming increasingly popular in the field of robotics, telecommunications and networking (mobile phones, portable navigation devices), multimedia (DVD players and video games), and consumer electronics products (HDTV, set-top boxes, and iPods).

Converting board-level designs into system chips adds significant value, since SoC solutions are smaller, usually lower power, and better suited to mobile products. The other driving forces are productivity and profit: these systems can provide the speedup of custom hardware without the development cost of Application-Specific Integrated Circuits (ASICs).

In an SoC, design components such as the CPU, peripherals, and memory blocks are quickly integrated into an interconnect architecture, and the hardware and software components are efficiently co-simulated. This eases bottlenecks in functional verification, timing convergence, time to market, and cost, and helps achieve the overall design productivity goal. An SoC product is designed as an embedded system implemented on a single chip, producing a smaller, faster, and more efficient system that can be placed in almost any environment.

Let’s now briefly compare the advantages and disadvantages of both board-level systems and system on chip designs.

Board-level systems

Advantages

  • Board-level systems are easy to use and have verified hardware with adaptability to apply modifications to the board.
  • Designing a board-level system is generally faster than designing an equivalent SoC.
  • Board-level debugging has a distinct advantage of visibility. When a particular board-level anomaly arises, the developer can physically modify the board by cutting traces, lifting pins, and adding wires.
  • It enables individual devices to be replaced or upgraded. If a particular device goes wrong, it’s relatively easy to replace.
  • It presents visibility to signals from the various components. The signals that travel from device to device are visible to analysis tools such as oscilloscopes, logic analyzers, etc.

Disadvantages

  • Higher overall design and engineering costs for high-volume product categories.
  • Limited flexibility to implement extensive customization.
  • Board-level systems with separate DSP and CPU often require different toolchains to support each device. Furthermore, the delineation of these devices makes CPU-DSP interaction problems challenging to resolve.
  • A board built from several discrete components can cause problems late in the product's support life.
  • As individual components become scarce or unavailable, finding replacement parts can be difficult and may force software modifications to support the new "replacement" part.

System on a Chip (SoC)

Advantages

  • An SoC has a smaller footprint and lower space requirements, since all the components sit on the same chip and are connected internally.
  • The smaller size also means lower weight.
  • Higher performance and flexibility due to the larger amount of circuitry that can be placed on the chip.
  • Application-specific SoCs can be cost-efficient.
  • Greater system reliability and lower power requirements.
  • An SoC provides greater design security at the hardware and firmware levels.
  • An SoC provides faster execution thanks to high-speed on-chip processors and memory.
  • A drastic reduction in the overall system cycle time and superior performance levels.

Disadvantages

  • The initial design and development cost is very high, so if only a small number of SoCs is produced, the cost per SoC is very high.
  • With such highly integrated devices, you cannot simply replace a single faulty component; the entire SoC must be replaced, so even minor damage to the chip can prove very costly.
  • Integrating everything on a single chip increases design complexity; the biggest challenge in SoC design is fitting a tremendous amount of logic onto one die.
  • SoCs are not well suited to power-intensive applications.
  • Visibility into the SoC is limited.
  • Testability issues and time-to-market pressures.

These advantages and disadvantages indicate why system-on-chip technology is so sought after across many industries. By integrating multiple chips into a single chip and producing an all-in-one electronic product, companies expect to reap large manufacturing benefits, especially in markets where price and device size are of critical importance.

Top 5 AI chip-making companies leading the smartphone market

AI chips in smartphones put the power of neural networks in the palm of your hand. Almost every standard smartphone today has at least one AI-enabled feature, such as intelligent imaging, facial recognition, or a voice-activated personal assistant. The problem is that most of these functions rely on the cloud or on processing distributed across general-purpose chips such as the CPU and GPU.

A clear advantage of an onboard AI chip in a smartphone is that the processing happens on the device itself, enabling a new breed of applications to run seamlessly while reducing power consumption and response times.

It also provides much better data privacy and security, because personal biometric data and other sensitive information remain on the device. Since the data never needs to be sent over the internet for processing in the cloud, its owners retain complete control over it.
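To illustrate why on-device processing protects privacy, here is a hedged sketch that runs a TensorFlow Lite model entirely locally. The model file name and the input frame are placeholders invented for this example; on an actual phone the same pattern is exposed through vendor NPU delegates (for instance via Android's NNAPI) rather than desktop TensorFlow.

```python
import numpy as np
import tensorflow as tf  # requires a TensorFlow build that includes the TFLite interpreter

# "face_embedding.tflite" is a placeholder model file for this sketch.
interpreter = tf.lite.Interpreter(model_path="face_embedding.tflite")
interpreter.allocate_tensors()

input_info = interpreter.get_input_details()[0]
output_info = interpreter.get_output_details()[0]

# Stand-in for a preprocessed camera frame; nothing here is sent to a server.
frame = np.random.rand(*input_info["shape"]).astype(input_info["dtype"])

interpreter.set_tensor(input_info["index"], frame)
interpreter.invoke()                      # inference runs entirely on the device
embedding = interpreter.get_tensor(output_info["index"])
print("Local inference done, output shape:", embedding.shape)
```

The key point is that the raw frame and the resulting embedding never leave local memory, which is exactly the property a dedicated on-device AI chip accelerates.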

According to industry studies, the number of smartphones equipped with onboard AI chips is projected to jump from 190 million devices to 1.25 billion in 2022, meaning roughly three-quarters of all devices shipped in 2022 would have onboard intelligence. The same forecasts expect the AI chip market, valued at $6,638 million in 2018, to reach $91,185 million by 2025.
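As a quick sanity check on those figures, the growth rate they imply can be computed directly. The short sketch below works out the compound annual growth rate (CAGR) over the 2018 to 2025 horizon stated above; the source does not give its forecasting methodology, so this is only an illustration of the arithmetic.

```python
# Implied compound annual growth rate (CAGR) from the market figures above.
start_value = 6_638      # USD millions, 2018
end_value = 91_185       # USD millions, 2025 forecast
years = 2025 - 2018

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")   # roughly 45% per year
```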

In this article, we will talk about the top 5 AI chip-making companies leading the smartphone market.

1. Qualcomm

Qualcomm has long been a world leader in smart mobile devices and is using AI to shape what comes next. It is now on the third generation of its AI-based mobile platform. The Qualcomm Snapdragon 845 mobile platform can run AI workloads directly on the smartphone without cloud access. The company also announced the Snapdragon 855, its next-generation processor, at its latest Snapdragon Tech Summit.

2. Apple

This leading smartphone maker has chips like the A12 Bionic, the first 7 nm smartphone chip, with 6.9 billion transistors. It pairs a Neural Engine that can process 5 trillion operations per second with a GPU claimed to be up to 50 percent faster than last year's A11 chip. With its system on a chip (SoC) designs, Apple is positioned as one of the world's leading makers of AI-capable chips. It had earlier developed the A11 Bionic SoC, used in the iPhone 8 and 8 Plus: a hexa-core processor whose two performance cores are about 25 percent faster than those of the A10 Fusion. Apple's AI chips combine a graphics processing unit with a Neural Engine that accelerates artificial intelligence workloads.

3. Huawei

This Chinese smartphone maker developed the Kirin 980, an AI chip built on a seven-nanometer process that is viewed as next-generation processing technology for smartphones. The company's recently launched Mate 20 smartphones are Huawei's first devices to carry this chip, the world's first 5G-ready 7 nm AI chip, bringing a higher level of on-device intelligence to the phone.

4. MediaTek

The Taiwanese chipmaker is working extensively to bring new AI capabilities to smartphones. Its Helio P90 chipset, launched at an event in Beijing, is one of the company's latest processors and offers considerable speed. It features a new dual-core APU (AI processing unit) and AI accelerator, the company's latest AI hardware, which it planned to bring to market in upcoming devices.

5. Samsung

Samsung is among the latest to introduce a next-generation mobile system on a chip (SoC), the Exynos 9820, which its Galaxy S10 phones were expected to embed. The chip adds a dedicated NPU to handle AI functions on the device. It is considerably faster than its predecessors at tasks such as image recognition and similar on-device workloads.
