In an interview, Vinay Rai, Executive Vice President, Engineering & India Site Lead at Netradyne, speaks with TimesTech on how vision-based edge AI is transforming road safety in India. He explains how real-time, edge-first driver monitoring helps detect drowsiness, reduce false alerts, and bring measurable safety outcomes to long-haul fleets operating in India’s demanding road, lighting, and connectivity conditions.
Read the full interview here:
TimesTech: India’s highways account for a disproportionate share of road crashes. How is vision-based edge AI helping quantify and mitigate drowsiness in real time, especially in high-risk long-haul corridors?
Vinay: Although national and state highways make up less than five per cent of India’s road length, they account for well over half of road accidents and nearly 60 per cent of fatalities, as per the Ministry of Road Transport and Highways. On Indian highways, fatigue usually builds up quietly over long stretches. Traditional systems only tell you something after a major incident, and even then, the cause is often reconstructed from partial evidence or witness statements. Without vision AI running at the edge, you rarely see the true sequence of events and behaviours that led up to a crash, so incidents get simplified as “speeding” or “driver error” with very little context.
Vision-based edge AI systems like Netradyne’s Driver•i can “see” what is happening in the cab and around the vehicle at every moment of driving. The system analyses 100% of drive time on the edge (Driver•i device) with high AI alert accuracy, rather than just looking at a few triggered events.
When early signs of drowsiness or risky behaviour appear, the system provides real-time audio alerts so the driver can take corrective actions immediately. Because this is all happening at the edge with no dependency on cloud connectivity for real-time processing, it works the same way regardless of whether the vehicle is in a remote area or in a place with strong network coverage. When the network is available, relevant information gets pushed to the cloud to provide visibility to the fleet safety managers.
At a fleet level, data coming in from various vehicles and drivers helps the fleet understand common behaviours that need coaching, which routes and hours of the day/night see more fatigue, which locations seem like clusters of near-misses, and which drivers or trips need extra support. Combined with tools like our scoring and coaching workflows, operators can redesign rosters, rest breaks and identify high-risk corridors, turning road safety into a measurable, cost-per-kilometre metric instead of a reactive line item.
TimesTech: Modern DMS relies heavily on ocular and facial dynamics, PERCLOS, blink velocity, gaze tracking, etc. What makes these computer-vision markers reliable even at night or in India’s harsh lighting conditions?
Vinay: These markers are only useful if they stay accurate in real-world conditions: low light on a rural stretch, harsh midday sun, glare from oncoming headlamps, sunglasses and tinted glass. For a DMS to reliably detect drowsiness, clear and consistent “vision” is the first requirement.
We address this in two layers. On the hardware side, our driver-facing camera is paired with infrared (IR) illumination and high dynamic range imaging. This helps capture facial features clearly, even in a variety of conditions and makes the system far more robust to conditions like glare, shadows, or headlights at night, and far less sensitive to sunglasses.
On the software side, our drowsiness models use scientific measures like PERCLOS, blink rates and eyelid velocities, but they’re trained on a large, diverse dataset across several hundred thousand devices deployed on roads in the US, Canada, the UK, Australia, New Zealand and India. That scale of data helps the AI learn what true fatigue looks like (long eye closures, slow blinks, head nods) and how to separate those from noise caused by lighting changes or natural head movements. The result is that these markers remain reliable across different times of day, weather and lighting conditions, including the extremes you see on Indian highways.
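To make the PERCLOS idea concrete, here is a minimal sketch of how such a marker can be computed over a sliding window of per-frame eye-openness estimates. The thresholds below are common textbook values (the “P80” convention of treating eyes more than 80% closed as closed), not Netradyne’s actual parameters.

```python
from collections import deque

# Illustrative PERCLOS (PERcentage of eyelid CLOSure) tracker.
# All thresholds are hypothetical textbook values, NOT production settings.

CLOSURE_THRESHOLD = 0.2   # eye-openness below 20% counts as "closed" (P80)
WINDOW_FRAMES = 900       # e.g. a 60 s window at 15 fps
ALERT_PERCLOS = 0.15      # sustained closure above 15% suggests drowsiness

class PerclosTracker:
    """Fraction of recent frames in which the eyes are mostly closed."""

    def __init__(self):
        self.closed = deque(maxlen=WINDOW_FRAMES)

    def update(self, eye_openness: float) -> float:
        """eye_openness per frame: 0.0 = fully closed, 1.0 = fully open."""
        self.closed.append(1 if eye_openness < CLOSURE_THRESHOLD else 0)
        return sum(self.closed) / len(self.closed)

    def is_drowsy(self) -> bool:
        # Only decide once a full window of evidence has accumulated.
        return (len(self.closed) == WINDOW_FRAMES
                and sum(self.closed) / len(self.closed) > ALERT_PERCLOS)
```

In a real DMS the per-frame eye-openness signal would come from a trained vision model; the point of the sliding window is that drowsiness is judged on sustained closure over time, not on a single long blink.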
TimesTech: Netradyne integrates data from both in-cab cameras and vehicle telematics. How does this multimodal fusion help differentiate between momentary distraction and true fatigue within milliseconds?
Vinay: Cameras tell you what the driver is doing; telematics tells you how the vehicle is responding. To understand risk properly, you need both.
A brief distraction might show up as a quick gaze shift or head turn on camera, but if the vehicle is still in lane, speed is steady and the driver corrects immediately, the risk profile is lower. Fatigue looks different: eye and head cues show drowsiness, steering inputs become less precise, the vehicle starts to drift or weave, and speed control becomes inconsistent.
By fusing in-cab video with signals from the vehicle like steering, lane position, acceleration, braking and IMU data, the system can classify whether we are seeing a short-lived lapse or a developing fatigue event. Our platform also supports advanced compound alerts, where multiple risk signals occurring close together are treated as a higher-severity pattern rather than isolated events.
This multimodal fusion significantly strengthens confidence in the alert. Drivers get fewer “false alarms”, fleets get earlier detection of real drowsiness, and interventions, whether an in-cab alert or a coaching follow-up, are better targeted and more credible to the driver.
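The distinction described above can be sketched as a simple heuristic over a window of fused signals. The field names and thresholds here are hypothetical, chosen purely to illustrate the idea, not Netradyne’s actual fusion logic.

```python
from dataclasses import dataclass

# Toy fusion of in-cab camera cues with vehicle/telematics signals.
# Every field name and threshold below is an illustrative assumption.

@dataclass
class FrameSignals:
    eyes_closed: bool         # driver-facing camera
    gaze_off_road: bool       # driver-facing camera
    lane_offset_m: float      # lane position from road-facing vision
    steering_variance: float  # steering-input jitter from the vehicle bus/IMU

def classify(window: list) -> str:
    """Rough heuristic over a time window of frames: sustained eye closure
    plus drifting and erratic steering reads as developing fatigue; a gaze
    shift with stable vehicle control reads as a momentary lapse."""
    n = len(window)
    closed  = sum(f.eyes_closed for f in window) / n
    drift   = sum(abs(f.lane_offset_m) > 0.5 for f in window) / n
    erratic = sum(f.steering_variance > 1.0 for f in window) / n
    if closed > 0.3 and drift > 0.3 and erratic > 0.3:
        return "fatigue"
    if any(f.gaze_off_road for f in window):
        return "momentary_distraction"
    return "normal"
```

The value of fusion shows up in the first branch: no single signal triggers a fatigue classification on its own; it takes camera evidence and vehicle-dynamics evidence agreeing over the same window.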
TimesTech: From a hardware-software perspective, what engineering challenges arise when designing DMS and ADAS systems for India’s unique road, lighting, and connectivity constraints and how have you addressed them?
Vinay: India is a tough environment for any DMS or ADAS solution: dense mixed traffic, variable road quality, inconsistent lane markings, extreme cabin temperatures and dust, plus areas where connectivity is intermittent. If you design only for textbook highway conditions, your system will either fail or generate noise.
On the hardware side, devices need to be compact, rugged and thermally stable, while still delivering high-quality images in harsh sun, low light, rain and dust. Our Driver•i D-450 uses automotive-grade components, wide dynamic range sensors, IR-enabled imaging and an embedded NVIDIA compute platform, designed to operate reliably in hot cabins and challenging mechanical and vibration conditions.
On the software side, a lot of legacy ADAS systems depend mainly on vehicle movement or simple sensor thresholds. That approach would struggle in India, where frequent lane changes or tight merges are sometimes normal defensive driving, not necessarily distraction or drowsiness. By using vision as the primary signal, directly observing what the driver and surrounding traffic are doing, we avoid relying only on proxies and can better separate “typical Indian driving” from truly unsafe behaviour.
Connectivity in India has improved significantly and is comparable to many global markets, but any system succeeds at scale only if it’s resilient to adverse conditions, like occasional network loss. So our system is built on an edge-first architecture: all critical analysis and alerts run on the device in real time, and video and metadata are synchronised to the cloud when coverage is available. This ensures drivers receive feedback instantly, regardless of connectivity, while fleets still get the full dataset later for coaching and analytics.
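The edge-first pattern described here is essentially store-and-forward: the alert path never touches the network, and event data queues locally until coverage returns. A minimal sketch of that structure (hypothetical, not Netradyne’s actual implementation):

```python
# Edge-first, store-and-forward sketch: the in-cab alert fires immediately
# on-device, while event metadata queues locally and drains to the cloud
# whenever connectivity is available. Illustrative structure only.

class EdgePipeline:
    def __init__(self):
        self.pending = []      # would be persisted to device storage in practice
        self.uploaded = []
        self.alerts_fired = 0

    def on_risk_event(self, event: dict):
        self._alert_driver(event)     # real-time path, no network dependency
        self.pending.append(event)    # deferred cloud sync

    def _alert_driver(self, event: dict):
        self.alerts_fired += 1        # in-cab audio alert would play here

    def sync(self, network_up: bool) -> int:
        """Push queued events when coverage is available; return count sent."""
        if not network_up:
            return 0
        sent = len(self.pending)
        self.uploaded.extend(self.pending)
        self.pending.clear()
        return sent
```

The design choice is the separation of the two paths: the driver-facing alert is synchronous and local, while the fleet-facing dataset is eventually consistent with the cloud.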
Finally, any AI system is only as good as the data behind it. With several hundred thousand devices already deployed across continents, including India, we’ve been able to train and refine our models on real-world patterns, not lab assumptions.
TimesTech: AI-led co-drivers promise fewer false alerts and more context-aware decisions. How does model training with Indian driving data improve detection accuracy and reduce alert fatigue for drivers?
Vinay: If drivers are constantly bombarded with irrelevant alerts, they simply tune the system out. The starting point for reducing alert fatigue is teaching the AI what “normal” looks like on Indian roads.
By training our models on actual Indian driving data (how traffic merges, how trucks and buses behave, how two-wheelers interact with heavy vehicles), we help the system understand which close interactions are routine and which combinations of behaviours really signal elevated risk. For example, repeated harsh braking, prolonged tailgating, visible fatigue markers and weaving together tell a very different story than a single close cut-in.
We also use compound alerts that trigger only when multiple risk signals occur in a tight window (e.g., distraction plus following too closely plus hard braking), which further improves precision. The result is fewer, more meaningful alerts that drivers take seriously. Fleets can then use that higher-quality signal for coaching focused on real behaviours, not noise. Over time, that builds trust in the system as a helpful, fair “coach” rather than a constant critic.
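The windowing rule behind a compound alert can be sketched in a few lines. The signal names and the 10-second window below are illustrative assumptions, not Netradyne’s production configuration.

```python
# Compound-alert sketch: escalate only when several distinct risk signals
# land within a short span. Signal names and window size are illustrative.

WINDOW_S = 10.0
REQUIRED = {"distraction", "following_too_close", "hard_braking"}

def compound_alert(events: list) -> bool:
    """events: time-sorted (timestamp_seconds, signal_name) pairs.
    Returns True if every REQUIRED signal occurs within some WINDOW_S span."""
    for i, (t0, _) in enumerate(events):
        seen = {name for t, name in events[i:] if t - t0 <= WINDOW_S}
        if REQUIRED <= seen:
            return True
    return False
```

Requiring all signals inside one tight window is what improves precision: any one of these signals alone is common in dense traffic, but their co-occurrence within seconds is rare outside genuinely risky situations.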
TimesTech: As edge intelligence matures, how do you see vision-based systems enabling usage-based insurance, predictive fleet management, and safety optimisation for India’s growing EV and logistics ecosystem?
Vinay: Edge intelligence effectively turns every connected vehicle into a live telemetry and safety node. For insurers, that means moving from simple “how many kilometres did you drive?” models to “how were those kilometres driven?” Vision-based systems like Driver•i analyse 100% of drive time and can quantify behaviours such as harsh versus smooth driving, time spent in high-risk corridors, frequency of near-miss patterns and actual compliance with traffic rules. That opens the door to more nuanced, behaviour- and risk-based insurance products that reward genuinely safe driving.
For fleet operators, the same edge insights become a predictive tool. If we start seeing a rise in fatigue alerts on specific routes, shifts or depots, managers can adjust rosters, rest breaks or routing before those patterns convert into serious incidents. And because analysis happens on the device itself and syncs to the cloud later, this intelligence is available even on long-haul runs where connectivity may not be guaranteed end-to-end.
For EV fleets, you can layer this safety and behaviour data with EV-specific signals like battery health and charging patterns, which then helps optimise duty cycles, charging schedules and even route planning to balance range, utilisation and safety. In all these cases, vision-based edge AI is about moving the ecosystem from reacting after an accident or breakdown to acting earlier with clear, data-led decisions.