The modern era of autonomous driving began in the early 2000s with a series of experimental initiatives organized by DARPA, the U.S. Defense Advanced Research Projects Agency, best known for catalyzing breakthrough technologies. Early autonomous vehicle testing was conducted primarily in the favorable climates of California and Arizona, where clear road infrastructure, high-contrast markings, and consistently dry asphalt provided near-ideal test conditions. Where lane markings are clearly visible and tire grip remains reliably high, self-driving cars have reached Level 4 autonomy. But the world is not all perfect weather: what works flawlessly in Phoenix can fail badly in snowy Minneapolis or during freezing rain in West Virginia.
In this article, we will examine the anatomy of winter blindness (the “whiteout” effect) and the physical reasons why sensors fail in bad weather. We will explore how annotated datasets aid in training a model to see through snow, and we will discuss new neural network architectures that can adapt to complex conditions.
Anatomy of winter blindness
When a person drives a car in a snowfall, they instantly adapt: they increase their following distance, brake more smoothly, and intuitively assess the slippery road surface, even when visibility is limited. Autonomous systems, by contrast, rely solely on a stream of raw sensor data. In winter, this data becomes “toxic”: snow, ice, and poor visibility fundamentally change the physics of signal propagation, creating critical challenges for perception models.
LiDAR sensors take the brunt. Under normal conditions, the laser beam reflects off hard surfaces and returns to the sensor. During snowfall, however, a phenomenon researchers call “volume noise” occurs.
According to a 2024 study published in the journal Sensors, snowflakes act like millions of microscopic obstacles. They scatter the laser beams, creating two main problems:
- Signal attenuation: the beam’s energy is absorbed or scattered away from the receiver, dramatically reducing the sensor’s effective range.
- Phantom interference: the system receives thousands of chaotic reflection points clustered directly around the car. Without proper filtering, the algorithm interprets this noise as a wall or a static obstacle, triggering false emergency stops (phantom braking).
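A common countermeasure in the literature is dynamic-radius outlier filtering, which exploits the fact that snow clutter is sparse and concentrated near the sensor, while real surfaces form dense clusters. The sketch below is a simplified, brute-force illustration of the idea; the function name and the `alpha`/`beta` tuning constants are illustrative assumptions, not values from the study cited above:

```python
import numpy as np

def dynamic_radius_filter(points, alpha=0.2, beta=3.0, min_neighbors=3):
    """Flag probable snow noise in a LiDAR point cloud.

    points: (N, 3) array of x, y, z coordinates in metres.
    The neighbour-search radius grows with range, because point density
    from real surfaces falls off with distance, while snow clutter sits
    close to the sensor. Returns a boolean mask: True = keep the point.
    """
    horizontal_range = np.linalg.norm(points[:, :2], axis=1)
    # Radius scales with range and an assumed 0.2-degree angular
    # resolution, with a floor of `alpha` metres for nearby points.
    radii = np.maximum(alpha, beta * horizontal_range * np.radians(0.2))
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        # Count neighbours inside this point's dynamic radius (minus self).
        keep[i] = np.count_nonzero(d < radii[i]) - 1 >= min_neighbors
    return keep
```

The double loop is O(N²) and is only meant to make the logic readable; a production variant would use a spatial index (e.g. a k-d tree) over millions of points per second.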
These challenges become even more evident when working directly with raw LiDAR data, a task that often requires complex, highly customized annotation. One illustrative example comes from a project implemented by the Keymakr team, which involved annotating 3D LiDAR point clouds for autonomous driving tasks. In this case, the team had to reconstruct road markings using only geometric information, without relying on RGB cues. Winter-like effects such as occlusions, noise, and partial erasure of markings meant that annotators had to rebuild continuous line geometry from fragmented, low-visibility data, a process that mirrors the exact difficulties autonomous vehicles experience in snowstorms.
To handle these challenges, Keymakr adapted both their tooling and annotation logic. The team worked with extended 3D scenes instead of isolated frames, combining measurements into a unified coordinate system to preserve spatial context. They implemented custom mechanisms to attach confidence indicators to individual polyline points, critical for distinguishing between true breaks in markings and noise caused by occlusion or sensor artifacts. Automated tools also measured and segmented excessively long invisible sections, reducing the risk of human error. This type of meticulous, geometry-first annotation directly contributes to training models that remain stable even when real-world LiDAR data becomes noisy, degraded, or partially unreadable due to snowfall.
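To make the confidence-tagging idea concrete, here is a minimal sketch of how per-point confidence labels on a polyline might be modeled. All class and field names are hypothetical, not Keymakr’s actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class PointConfidence(Enum):
    VISIBLE = "visible"    # marking clearly present in the point cloud
    INFERRED = "inferred"  # geometry reconstructed across an occlusion
    TRUE_GAP = "true_gap"  # marking genuinely absent (e.g. a dashed line)

@dataclass
class PolylinePoint:
    x: float
    y: float
    z: float
    confidence: PointConfidence

@dataclass
class LaneMarking:
    points: List[PolylinePoint] = field(default_factory=list)

    def invisible_ratio(self) -> float:
        """Fraction of the polyline that annotators had to infer.

        A simple quality metric: automated tooling could flag markings
        whose inferred fraction exceeds a threshold for human review.
        """
        if not self.points:
            return 0.0
        inferred = sum(p.confidence is PointConfidence.INFERRED
                       for p in self.points)
        return inferred / len(self.points)
```

Distinguishing `INFERRED` from `TRUE_GAP` is the key design choice: a model trained on such labels learns that an invisible stretch of marking may still be geometrically continuous.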
Edge AI and snow recognition
During a winter storm, an unstable internet connection on the highway is a common occurrence. Relying on cloud computing to make critical decisions in such conditions is, to put it mildly, dangerous. Therefore, the industry is shifting its focus to Edge AI, which executes complex algorithms directly on the car’s computer.
The main challenge here is limited computing power: the on-board computer must simultaneously control the vehicle, plan the route, and process bandwidth-heavy sensor data. One solution is to optimize the neural networks themselves. According to a study published in IEEE Access earlier this year, engineers have developed a highly efficient perception system based on the YOLOv8 architecture.
This algorithm, known for its speed, has been adapted for a dual task in bad weather conditions:
- Snowfall recognition: the system analyzes the video stream in real-time and classifies the intensity of precipitation, separating snow noise from useful objects.
- Adaptive tracking: depending on the determined level of snow, the algorithm dynamically changes the logic of tracking targets (other cars).
The key advantage of this approach is autonomy: the system does not require communication with a data center and can adapt to weather changes instantly. If light snow suddenly turns into a blizzard, Edge AI automatically adjusts its perception filters, maintaining continuous situational awareness.
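The adaptive-tracking idea can be illustrated with a small sketch: a classifier’s snow-intensity label selects a preset of tracker parameters. The preset names and threshold values below are illustrative assumptions, not the actual parameters from the IEEE Access study:

```python
from dataclasses import dataclass

@dataclass
class TrackerParams:
    gate_distance: float   # max association distance between a track and a detection (m)
    max_coast_frames: int  # frames a track survives without a matching detection
    min_confidence: float  # detection score threshold

# Illustrative presets: heavier snow -> wider association gates, longer
# coasting through dropouts, and a higher score threshold to reject
# snow-induced false detections.
SNOW_PRESETS = {
    "clear":    TrackerParams(gate_distance=2.0, max_coast_frames=5,  min_confidence=0.30),
    "light":    TrackerParams(gate_distance=3.0, max_coast_frames=10, min_confidence=0.40),
    "blizzard": TrackerParams(gate_distance=4.5, max_coast_frames=20, min_confidence=0.55),
}

def select_params(snow_class: str) -> TrackerParams:
    """Map the classifier's snow-intensity label to tracker settings.

    Unknown labels fall back to the most conservative preset, so a
    misbehaving classifier degrades safely rather than loosely.
    """
    return SNOW_PRESETS.get(snow_class, SNOW_PRESETS["blizzard"])
```

Because the lookup is a constant-time table read, re-tuning the tracker every frame costs essentially nothing on the embedded computer.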

Stress testing: geography and seasonality
No simulation, no matter how sophisticated, can completely recreate a real winter road. Different snow moisture levels, specific local de-icing agents, unpredictable driver behavior on ice: all of this requires a physical presence in regions with harsh climates. That is why industry leaders validate their autopilots on real roads in exactly such regions.
In November, Zeekr began testing its new electric cars on the roads of Minneapolis. The city was chosen deliberately: its severe frosts and frequent snowfalls make it well-suited for testing the system’s reliability.
What sets the sixth-generation vehicles apart is their winter-specific engineering:
- Clean sensors: engineers installed heating and air-blowing systems for the cameras and lidars. This solves the problem of wet snow sticking to the sensors, which previously quickly “blinded” the autopilot.
- Real data: city testing lets engineers observe how the car responds to ice and to lanes narrowed by snowdrifts.
Aurora trucks in bad weather
In May, Aurora Innovation took a significant step by announcing the launch of commercial driverless transportation in Texas. However, the launch in the sunny state is just the beginning. Since then, the company has continued to expand its operational footprint and validate its technology under more varied conditions.
Over the course of 2025, Aurora has expanded its driverless routes beyond the initial Dallas-Houston corridor to include a second long-haul lane between Fort Worth and El Paso, covering approximately 600 miles of interstate highway. Additionally, it has initiated nighttime autonomous operations, effectively increasing the daily utilization and resilience of its systems. The company has logged more than 100,000 driverless miles on public roads without safety incidents and is preparing for broader deployment across additional Sun Belt corridors, with plans for hundreds of trucks by late 2026. To support this scaling, Aurora is advancing new hardware with an extended sensing range and working to demonstrate reliable performance in a wider range of environmental conditions, including rain, dust, and heavy winds, as part of the ongoing validation of its autonomous stack.
Future outlook
The main lesson of 2024-2025 was that the technology is ready, but the legislation is not. That is why 2026 will be a turning point, not so much because of new inventions as because of a change in the rules of the game. The introduction of strict certification standards (NHTSA, UNECE) will create an artificial but necessary filter. The requirement to prove autopilot reliability in low-visibility conditions both on paper and at the proving ground will force companies to ensure appropriate data quality and validation of their driverless models.
For businesses, this can open up fundamentally new opportunities. Official approval for winter operation marks the transition from seasonal pilot projects to full-fledged year-round operations across Europe and the US, not just in warm regions with a “laboratory” climate. This dramatically improves the economics of autonomous services: the autopilot becomes a real working tool that can be scaled and planned around for years ahead.


