AI in Silicon: Axiado CEO on Revolutionizing Cybersecurity Hardware

In an interview, Gopi Sirineni, Founder & CEO of Axiado, spoke with TimesTech about balancing personal responsibility with professional ambition and how Axiado is transforming cybersecurity by embedding AI directly into hardware. He detailed how their Trusted Control Unit (TCU) architecture enables proactive threat detection, combats ransomware, and prepares for post-quantum threats — all while minimizing false positives and ensuring modular, efficient protection.

Read the full interview:

TimesTech: What were the biggest challenges you faced while trying to balance personal dreams with financial responsibilities?

Gopi: If you know the responsibility the oldest son carries in a family, it doesn’t matter how young you are. As long as you’re the oldest son, you have that responsibility.

You had to carry yourself a certain way, know how things worked, and be out in front doing it. It’s the same way for me at work.

Take the lead. You’re born to lead. That’s all it is.

There are problems ahead. You take that into account already, expect them and face them, and learn a way through, because you have to show the younger people behind you how to handle these things. That helps me in the work.

TimesTech: Axiado integrates AI directly into silicon to counter modern cyber threats. Can you explain how this hardware-anchored approach outperforms traditional software-based security methods, especially in environments like private 5G networks and data centers?

Gopi: Most cybersecurity solutions today are software-based. The reason is that a bad actor mostly enters your system through software, or else through physical access to the hardware.

So all these solutions in the market, whether it’s Palo Alto Networks as a physical firewall, or CrowdStrike, Snowflake, Centillion, Google security, Microsoft security, all of them try to protect a port of entry. The port of entry is where the bad actor comes in: email phishing, for instance.

So there are better tools to monitor your email: somebody clicking on something, a proxy server catching that click, or somebody coming into a session with a user ID you created. These are all addressed purely in software.

And what we do differently: even after all the security software solutions in the market today, whether it’s a private or public network (you mentioned 5G), your system hardware is still a threat surface. What you care about is the content on the system: what is on the hard disk, what is on the platform that lets your compute engines do what they need to do. Those engines can be broken, the system can be broken, paused, or locked so you can’t use it, or the content can be stolen or copied.

So we come into the picture exactly to protect the hardware platform. You already have protection, but if attackers pass through it and get in, they try to take over your system: lock it and ask for money, or encrypt the hard disk with their own keys and ask for money because you can no longer see your own content, or simply steal the data. In all three of those cases, which come down to the hardware you touch, there is unfortunately no security today in that part of the world.

So we come in as a last line of defense sitting right next to it. Think of a bank heist: somebody turns off the cameras, breaks the doors, and comes inside. Once he’s in, all that happens today is alarms going off or somebody calling the cops. Until the cops arrive, he can steal whatever he can and run away.

In real physical life, you can trap him and catch him. But in the digital world, before you realize somebody is stealing the data, it’s too late, because it has already been copied off. So you need somebody close to you; think of us as the ones sitting next to the money chest in that bank-heist example.

So we are the last line of defense. Why AI in hardware? Being the hardware ourselves, we need to make sure every pluggable card is verified. The second part of the problem is physical access to the system, meaning somebody comes in through a physical access port. So we make sure we are the first thing to boot up on the system, that keys and key management are all registered and that we authenticate ourselves, and only then do we let the main CPU turn on and authenticate itself too.

Then the drivers are updated and registered, and the plug-in cards (AI cards, network cards, accelerator cards, graphics cards, whatever you plug into the hardware) are attested and authenticated by us as well. This is a completely ground-up approach, which was not there before. We make sure all the hardware is authenticated and attested first, meaning we know the right card is plugged in, one that belongs to you, not a bad actor coming in.

Then, on top of that, every application you run, any application, any LLM model, any AI model, needs to be attested and authenticated as well. We follow zero-trust models, meaning you don’t trust any application, user, or session just because it ran before; you do the attestation again. That’s the difference of a completely ground-up approach: we come from the ground-level platform, versus coming from the top.
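The boot sequence Gopi describes, with the TCU verifying itself first and then gating the main CPU, drivers, and plug-in cards, amounts to an ordered chain of attestation. A minimal sketch in Python, for illustration only: the component names, firmware images, and hashing scheme are assumptions, not Axiado’s implementation.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    firmware: bytes
    expected_digest: str  # golden measurement registered with the root of trust

def measure(firmware: bytes) -> str:
    """Hash the firmware image, as a root of trust would at boot."""
    return hashlib.sha256(firmware).hexdigest()

def attest_boot_chain(chain: list[Component]) -> list[str]:
    """Verify each component in boot order; halt on the first mismatch."""
    attested = []
    for c in chain:
        if measure(c.firmware) != c.expected_digest:
            raise RuntimeError(f"attestation failed at {c.name}; halting boot")
        attested.append(c.name)
    return attested

# Demo chain: the security controller boots first, then the main CPU,
# then a plug-in card. All firmware blobs here are placeholders.
chain = [
    Component("tcu", b"tcu-fw", hashlib.sha256(b"tcu-fw").hexdigest()),
    Component("main-cpu", b"cpu-fw", hashlib.sha256(b"cpu-fw").hexdigest()),
    Component("nic-card", b"nic-fw", hashlib.sha256(b"nic-fw").hexdigest()),
]
print(attest_boot_chain(chain))  # ['tcu', 'main-cpu', 'nic-card']
```

The point of the ordering is that nothing later in the chain runs until everything before it has produced a measurement matching its registered value.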

We don’t get to see if somebody breaks into your email or your website, but we do get to see when they are trying to steal something.

TimesTech: With ransomware-as-a-service and supply chain attacks becoming more prevalent, how does Axiado’s AI-driven TCU (Trusted Control Unit) architecture enable proactive threat detection and automated protection at the hardware level?

Gopi: We sit next to the hardware, so we make sure everything is attested. Once every physically connected part has been verified by us, we go into watchdog mode, which means we are now learning the behavior of every user, every card, every session, every LLM model, every AI model you are running.

If any anomaly happens, behavior that differs from the learned profile of a user or application, we detect it. Then we compare against our AI models and a blacklist of patterns, meaning what a bad actor’s behavior looks like. We take the present attacks in the market and train our AI on their patterns and behavior, so our AI has already learned how they look.

So, following the profile and behavior, once you find an anomaly, meaning something is possibly wrong, you compare it to the blacklist of patterns we already have. If it matches, you know that’s a hit, a bad actor, bad behavior, and you stop the attack. If it doesn’t match but you still know it’s an anomaly, meaning something different is happening, then you go for multi-factor authentication.

Now assume the bad actor passes that authentication too, because credentials are compromised. We still know something is different, so we collect the forensic data and push it off the system, so that even if we can’t stop the attack, we know where it came from and what happened in its history. Today, those things are not available.
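The escalation ladder described above (learned-behavior anomaly first, then a match against known attack patterns, then multi-factor authentication, then forensic capture) can be sketched as a small decision function. All names, values, and the threshold here are hypothetical, not Axiado’s API.

```python
def respond(anomaly_score: float, pattern: str,
            blacklist: set[str], mfa_passed: bool,
            threshold: float = 0.8) -> str:
    """Decide the response for one observed event, per the ladder above."""
    if anomaly_score < threshold:
        return "allow"              # behavior matches the learned profile
    if pattern in blacklist:
        return "block"              # known bad-actor pattern: stop the attack
    if not mfa_passed:
        return "block"              # unknown anomaly and MFA challenge failed
    # MFA passed, but credentials may be compromised: keep the evidence.
    return "collect-forensics"      # push data off-system, retrain in the cloud

blacklist = {"mass-encrypt", "exfil-burst"}
print(respond(0.3, "normal-io", blacklist, mfa_passed=True))      # allow
print(respond(0.95, "mass-encrypt", blacklist, mfa_passed=True))  # block
print(respond(0.9, "unseen", blacklist, mfa_passed=True))         # collect-forensics
```

The last branch reflects the point Gopi makes: even when the attacker survives every gate, the system still exports forensic data so the attack can be reconstructed and learned from later.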

And then we also send it to the cloud and train on it, because if we couldn’t stop the attack, we learn from it as a reinforcement model; this time you are teaching the AI in the cloud. On the system it is mostly inferencing; in the cloud it is mostly training for us.

TimesTech: False positives and high deployment costs are major barriers to AI security adoption. How is Axiado addressing these issues through its MLOps pipeline and integrated AI models, as illustrated in your platform architecture?

Gopi: Any AI model is simple in this respect: what you just described is a problem for anything you train. It’s like training a dog.

If you train a dog to be bad, it’s going to be bad. So we come in there too. I keep using the same words: whether it’s firmware, an application, any LLM model, any AI model, we attest and authenticate the person who is going to train it and the data set that is going to train it. We make sure that happens on the system side, so we are protecting the models on the system side.

The second part of the question is how we make sure we don’t give a false positive. In the beginning we need approximately 100 hours of training. We provide base models built from a data set we already have, so you start with something, but then we also need at least 100 hours of running that particular application on the system to validate that it fits the models, and then we start collecting the data.

So, as time passes, the efficiency and efficacy of the models’ detection becomes much better. Day one, obviously not. The beauty of our platform is that the models and the data sets are different on each platform, because each one learns differently: we collect sensor data, temperature data, fan controls, the voltage and power rails, and every system is a little different. That needs to be set and learned, so we need some time to get there.

How do you stop a false positive from being destructive? You don’t need to stop everything; you can quarantine a particular application. We are not pausing the whole system if we find something. We are basically holding that particular application, or a model, or a user, or a port, or a VM.

So the modularization we do helps, because you are not actually stopping anything and everything. For us, a false positive is not going to be destructive the way it normally is with AI models; but the big one is definitely destructive, so we watch out for those.
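The modular quarantine Gopi describes, isolating only the suspect workload while the rest of the platform keeps running, can be sketched like this. The class and workload names are illustrative, not Axiado’s software.

```python
class Platform:
    """Toy model of per-workload quarantine instead of a whole-system halt."""

    def __init__(self, workloads: list[str]):
        self.state = {w: "running" for w in workloads}

    def quarantine(self, workload: str) -> None:
        """Hold only the suspect workload; everything else keeps running."""
        self.state[workload] = "quarantined"

p = Platform(["db", "llm-serving", "billing"])
p.quarantine("llm-serving")  # a false positive here costs one workload, not the box
print(p.state)
# {'db': 'running', 'llm-serving': 'quarantined', 'billing': 'running'}
```

This granularity is what makes a false positive survivable: the blast radius of a wrong call is one application, model, user, port, or VM rather than the whole system.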

TimesTech: How is Axiado preparing for the impact of quantum computing on cybersecurity, and what roles will autonomous AI systems play in fortifying future-proof security models?

Gopi: On the first part, we already support PQC, post-quantum cryptography algorithms. On the second part, autonomous: we are not selling into automobiles yet; today we sell mostly into data centers, where most of the compute engines are working. Autonomous functions are mostly what the mainstream AI takes care of, the XPUs outside; we only protect that.

Our packet information is different. In the mainstream AI world today, everything like NVIDIA is computer-vision-based; inference is image-based. We use packet patterns, packet data, packet information, so our inference size is very small. That way we don’t need big computing capacity, we can run pretty fast, and we are not competing with the mainstream AI; that’s different from us.

TimesTech: Given your experience contributing to global networking standards and technologies, how do you see the evolving regulatory environment influencing innovation in AI-based security?

Gopi: As we talked about, the dog has to be trained well, because unfortunately this is a power tool for bad actors and good actors alike. Regulation has started: there is something called ethical AI, and there are rules to be followed. Every country is going to have something different; it’s not fully there yet. For us, we need to follow FIPS compliance for security reasons, covering how the platform and firmware are updated and so on. Those rules have been followed for generations, for decades, and we still comply with them.
