AI Security Insights: Aravind Putrevu on Risks, Controls & Strategy

In an interview with TimesTech, Aravind Putrevu, Tech Evangelist, spoke about how AI is reshaping development environments and the urgent need for secure implementation. Drawing from his experience in cloud migration and DevOps, Aravind shared actionable strategies to mitigate AI risks, ensure responsible usage, and build human-in-the-loop systems as AI tools grow more autonomous.

Read the full interview here:

TimesTech: You’ve moved from cloud security to AI-assisted coding. What lessons from past tech shifts apply to today’s AI evolution?

Aravind: When I helped teams migrate from on-prem databases to cloud services, and later break monoliths into microservices, I noticed a few recurring patterns:

• Start small and test. A big-bang migration almost always brings unforeseen issues. When I worked on securing our cloud migration, we moved one service at a time, which let us catch problems early and correct course as we went. The same approach works for AI features: pick a single workflow, test it, learn, and then scale.

• Build security into the process. We put scanners and policy checks directly into our CI/CD pipelines, which spared us a lot of time and heartache compared with tacking security on later. With AI code generators, linting and testing the generated code before it reaches production should be non-negotiable (a minimal CI sketch follows this list).

• Maintain oversight. When every team spun up its own Kubernetes clusters, costs and risks exploded. AI assistants need comparable guardrails: access controls, usage limits, and audit logs, so they don't run rampant or spill sensitive information.

• Change people and workflows, not just tools. New technology always brings new ways of working. Just as DevOps introduced on-call duties and incident retrospectives, AI will require keeping humans in the loop through prompt reviews and output audits.
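
To make the second point concrete, here is a minimal sketch of the kind of CI gate I have in mind: AI-generated changes are linted and the test suite has to pass before a merge is allowed. The script, the file-passing convention, and the choice of ruff and pytest are illustrative assumptions, not a prescription.

```python
"""CI gate sketch: lint and test AI-generated code before it can merge.

A minimal illustration only. Assumes the changed files are passed on the
command line and that `ruff` and `pytest` are installed in the CI image.
"""
import subprocess
import sys


def run(cmd: list[str]) -> bool:
    """Run one check and report whether it passed."""
    print(f"running: {' '.join(cmd)}")
    return subprocess.run(cmd).returncode == 0


def main() -> int:
    changed_files = sys.argv[1:]              # e.g. files touched by the AI assistant
    if not changed_files:
        return 0                              # nothing to check

    checks = [
        ["ruff", "check", *changed_files],    # static linting of the generated code
        ["pytest", "--quiet"],                # the full test suite still has to pass
    ]
    if not all(run(cmd) for cmd in checks):
        print("AI-generated changes failed checks; blocking merge.")
        return 1
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```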

TimesTech: What do incidents like Claude’s blackmail signs and o3’s sabotage suggest about AI’s unpredictability?

Aravind: These incidents show that when we train models to pursue goals, their behaviour can become unpredictable and slip out of our control. A single kill-switch is not enough; we need multiple checks and balances, plus human oversight.

TimesTech: Where are the biggest security gaps in current AI systems, and what should developers watch out for?

Aravind: 1. Prompt hijacking – Attackers can craft prompts that trick your AI into taking actions it should not, such as leaking data or executing malicious code. Never trust what users send in.

2. Data poisoning – If your fine-tuning data is openly sourced, an adversary can slip harmful examples into it. Lock down your data sources and apply privacy-preserving techniques so bad data stays out.

3. Overly broad entitlements – Many AI applications run with network or file-system access they never need. Treat them like any other service: containerize them, apply strict firewall rules, and rotate keys frequently.

4. Missing audit trails – Without a record of the prompts sent and the replies received, you cannot troubleshoot when something goes wrong. Keep tamper-proof logs so you can rewind and investigate (a rough sketch follows this list).
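
As an illustration of point 4, the sketch below chains each prompt/response record to the previous one with a SHA-256 hash, so any edit or deletion breaks verification. The file name and record fields are assumptions made for the example, not a standard format.

```python
"""Tamper-evident audit log sketch for AI prompts and responses."""
import hashlib
import json
import time

LOG_PATH = "ai_audit.log"          # hypothetical log location


def _digest(prev_hash: str, record: dict) -> str:
    """Hash this record together with the previous entry's hash."""
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def append(prompt: str, response: str) -> None:
    """Append a chained record of one prompt/response exchange."""
    try:
        with open(LOG_PATH) as f:
            prev_hash = json.loads(f.readlines()[-1])["hash"]
    except (FileNotFoundError, IndexError):
        prev_hash = "0" * 64                     # genesis entry
    record = {"ts": time.time(), "prompt": prompt, "response": response}
    record["hash"] = _digest(prev_hash, record)
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(record) + "\n")


def verify() -> bool:
    """Recompute the chain; returns False if any entry was altered."""
    prev_hash = "0" * 64
    with open(LOG_PATH) as f:
        for line in f:
            record = json.loads(line)
            claimed = record.pop("hash")
            if _digest(prev_hash, record) != claimed:
                return False
            prev_hash = claimed
    return True
```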

TimesTech: How can open-source communities help make AI safer—and is open-sourcing powerful models a risk?

Aravind: The upside of openness

• Faster bug hunting, because anyone can review the code and spot defects.

• Shared methods and libraries for probing models with adversarial attacks.

• A vibrant ecosystem of plug-ins and frameworks for safer prompts and outputs.

My take: publish model designs and safety research, including smaller reference models, but keep the largest weights under controlled licenses and usage restrictions.

TimesTech: As AI becomes more embedded in dev tools, how do we draw the line between help and harmful manipulation?

Aravind: Helpful AI

• Auto-completing code snippets or boilerplate tests, where you can see exactly what is being added and adjust it.

Potential traps

• Refactoring suggestions that rewrite large sections of code without your approval and leave behind hard-to-detect bugs.

• Dependency-update tools like DepDelta that automatically bump your dependencies and can swap in a malicious library (not intentionally, but the result is the same).

• Hidden bias in recommendations that subtly steers you toward particular tools or vendors.

Good UX means every suggestion has a clear source, is easy to undo, and warns you before it changes anything big.
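
As a sketch of that last point, the snippet below assumes suggestions arrive as unified-diff strings and asks for explicit confirmation before applying anything above an arbitrary size threshold; the threshold and callback names are purely illustrative.

```python
"""Sketch of a "big change" guardrail for an AI coding assistant."""

MAX_CHANGED_LINES = 40      # assumed threshold, tune per team


def changed_lines(diff: str) -> int:
    """Count added/removed lines in a unified diff."""
    return sum(
        1
        for line in diff.splitlines()
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    )


def apply_suggestion(diff: str, source: str, apply, confirm) -> bool:
    """Apply a suggestion, pausing for confirmation when it is large.

    `apply` and `confirm` are caller-supplied callbacks: `apply(diff)`
    writes the change, `confirm(message) -> bool` asks the human.
    """
    n = changed_lines(diff)
    if n > MAX_CHANGED_LINES:
        message = f"Suggestion from {source} touches {n} lines. Apply it?"
        if not confirm(message):
            return False            # easy, explicit opt-out
    apply(diff)
    return True
```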

TimesTech: How can companies build security strategies that hold up as AI grows more autonomous?

Aravind: • Red and blue teaming for AI: regularly probe your systems with attacker-style prompts and misuse scenarios, then have your defenders test how well the AI's defenses hold up.

• Defense in depth: policy-as-code for API rules, real-time monitoring for anomalous outputs, and human review of risky actions (see the policy sketch after this list).

• AI incident playbooks: define concrete steps to isolate a misbehaving model, kill its processes, roll back to a known-safe version of the application, and communicate across teams.

• Cross-functional leadership: establish an AI council of security, product, legal, and ethics people, so that no major AI feature ships without its approval.
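
As a rough sketch of what policy-as-code for AI actions can look like, the snippet below checks every tool call an agent wants to make against a declarative rule table and flags risky calls for human approval. The tools, roles, and rules are made up for illustration.

```python
"""Policy-as-code sketch for AI agent actions."""
from dataclasses import dataclass

# Declarative rules: which roles may use which tools, and whether a
# human must approve the call first. All entries are hypothetical.
POLICY = {
    "read_docs":     {"roles": {"analyst", "engineer"}, "human_approval": False},
    "run_migration": {"roles": {"engineer"},            "human_approval": True},
    "delete_data":   {"roles": set(),                   "human_approval": True},
}


@dataclass
class Decision:
    allowed: bool
    needs_human: bool
    reason: str


def evaluate(tool: str, role: str) -> Decision:
    """Evaluate a requested tool call against the policy table."""
    rule = POLICY.get(tool)
    if rule is None:
        return Decision(False, True, f"unknown tool '{tool}' - deny by default")
    if role not in rule["roles"]:
        return Decision(False, True, f"role '{role}' may not call '{tool}'")
    return Decision(True, rule["human_approval"], "allowed by policy")


if __name__ == "__main__":
    for tool, role in [("read_docs", "analyst"), ("delete_data", "engineer")]:
        print(tool, role, evaluate(tool, role))
```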
