
What do we do about data privacy in the age of AI?

Hailing from New Delhi, Praveer holds a Bachelor of Business Administration in Marketing from Amity University. With over 20 years of experience across diverse sectors such as retail and technology, Praveer brings deep expertise in overseeing product, technology, and community.

What are the data security risks organizations take on when they adopt emerging AI technologies?

One of the most significant data security risks is the need to share sensitive data with AI models, particularly Large Language Models (LLMs) or Small Language Models (SLMs). These models often require access to vast amounts of data to function effectively. When organizations use cloud-based AI solutions, their data must be transmitted to and processed by external servers, which can expose it to potential breaches, unauthorized access, or misuse such as data poisoning.

Security risks are further heightened in multi-cloud environments, where data traversing the public internet or residing on third-party servers isn't always under an organization's direct control, thereby increasing cyber risk. The answer is an end-to-end, on-premise approach that keeps data within an organization's own secure infrastructure. On-premise AI systems allow for model inferencing and processing without moving data outside the organization's network, enabling greater control over data security and compliance with regulatory norms like India's Digital Personal Data Protection (DPDP) Act.
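To make the on-premise pattern concrete, here is a minimal sketch of local inferencing using the open-source Hugging Face transformers library; the local model path is a placeholder, and the library choice is an assumption for illustration, not a statement of how any particular product works:

```python
# Minimal sketch: on-premise inferencing with a locally hosted model.
# Assumes the `transformers` library and model weights already downloaded
# to local disk, so no document text ever leaves the organization's network.
from transformers import pipeline

# Load a model from a local path (fetched once, inside the perimeter);
# "./models/local-llm" is a placeholder for wherever the weights live.
generator = pipeline("text-generation", model="./models/local-llm")

# Sensitive text is processed in-process; nothing is sent to a cloud API.
result = generator("Summarize the attached contract clause:", max_new_tokens=64)
print(result[0]["generated_text"])
```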

Is the onus of data security on companies that build AI solutions or organizations that adopt it?

Data security is a shared responsibility between AI solution providers and the organizations that implement them. AI models require significant computational resources for inferencing, which involves handling sensitive data during processing. Therefore, both parties must collaborate to ensure data protection.

The onus is on AI solution providers to build systems that are secure by design while also offering options for on-premise deployment. Providers must also ensure that data is encrypted and protected during processing, and be transparent about how data is used. This transparency helps ensure that AI models aren't retaining or improperly learning from proprietary data, which could leak sensitive information.

On the other hand, organizations adopting AI solutions must assess the security features of the tools before integration, ensuring they align with internal data protection policies and regulatory requirements. They must implement AI workflows that enhance their intellectual property (IP) without compromising sensitive data, and consider adopting full-stack AI solutions that offer greater control over data and model operations.

When both parties prioritize data security, organizations can protect sensitive, business-critical information, strengthening their IP and competitive advantage.

Why are so many emerging AI solutions like generative AI lacking in data security?

Many emerging AI solutions, especially those involving generative AI, rely heavily on cloud-hosted models and LLM-based intelligence. In such a scenario, user data is shared with cloud service providers for processing. This creates four major challenges. The first is cloud dependency: cloud-based models require data to be sent over the internet and stored on third-party servers, which increases exposure to cyber threats and reduces control over data handling practices.

The second is the fast-paced evolution of AI technologies, which favors functionality and innovation over security, producing solutions where data protection is an afterthought. The third is the complexity of AI models: advanced models are black boxes, making it difficult to ensure they handle data securely and comply with privacy regulations. The fourth is resources: smaller companies like startups developing AI solutions often lack the means to implement robust security measures, making data security a challenge.

To address these security gaps, organizations should seek AI solutions that prioritize data protection, like those offering on-premise deployment options. This ensures that any data shared with AI models is adequately protected through encryption and strict access controls.
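As one illustration of protecting data before it crosses a trust boundary, here is a minimal sketch using the open-source cryptography package's Fernet symmetric scheme; the payload and the in-process key handling are assumptions for the example, and real deployments would use a managed key vault:

```python
# Minimal sketch: encrypting a payload before it leaves the organization,
# using the `cryptography` package's Fernet (AES-based) symmetric scheme.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, held in a managed key vault
cipher = Fernet(key)

sensitive = b"customer record: account 12345, balance ..."
token = cipher.encrypt(sensitive)  # ciphertext is safe to transmit or store

# Only holders of the key, inside the trust boundary, can recover the data.
assert cipher.decrypt(token) == sensitive
```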

How can AI operating systems be built with security and privacy in mind?

Building AI operating systems with security and privacy at the forefront requires adherence to six key pillars. First is on-premise deployment, which ensures that AI systems reside within the organization's own infrastructure, offering greater control over data flow. These solutions reduce reliance on the public cloud, minimizing the risk of data breaches during transmission or storage. Second is data sovereignty, which is essential to complying with local regulations and ensuring data resides within prescribed geographical boundaries. Third is secure model inferencing, which involves designing AI models that can perform inferencing without exposing raw data. Techniques like federated learning allow models to learn from data without transferring it to a central location, as sketched below.
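To illustrate the federated learning idea behind the third pillar, here is a minimal sketch of federated averaging (FedAvg) on a toy linear model; the model, synthetic data, and hyperparameters are illustrative assumptions, not a production recipe:

```python
# Minimal sketch of federated averaging (FedAvg): each site trains on its
# own data and shares only model weights, never the raw records.
import numpy as np

def local_update(weights, X, y, lr=0.01, steps=100):
    """One site's training pass; X and y never leave the site."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):                               # three independent data owners
    X = rng.normal(size=(50, 2))
    sites.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

global_w = np.zeros(2)
for _ in range(10):                              # federation rounds
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(updates, axis=0)          # server averages weights only

print(global_w)                                  # approaches [2.0, -1.0]
```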

The fourth pillar is establishing feedback loops for continuous learning. While developing AI models, it's important to install mechanisms by which the intelligence generated from AI processes feeds back into the organization's own systems. This ensures continuous improvement of internal models and the building of proprietary IP based on the organization's data. Fifth is using privacy-preserving technologies: businesses can leverage differential privacy and homomorphic encryption to protect individual data while still allowing meaningful data analysis. The final pillar is installing robust access controls. Implementing least-privilege access with strict authentication and authorization protocols ensures that only the right people can access sensitive data and AI functionalities.
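For the fifth pillar, here is a minimal sketch of differential privacy via the Laplace mechanism; the query, epsilon value, and data are illustrative assumptions:

```python
# Minimal sketch of differential privacy via the Laplace mechanism:
# noise calibrated to a query's sensitivity masks any individual's
# contribution while keeping the aggregate answer useful.
import numpy as np

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count; the sensitivity of a count query is 1."""
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

salaries = [42_000, 58_000, 91_000, 120_000, 77_000]
# How many employees earn over 75k? The noisy answer hides whether any
# single record was present in the dataset.
print(dp_count(salaries, lambda s: s > 75_000))
```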

By incorporating these elements, organizations can build AI operating systems that not only deliver powerful insights but also uphold the highest standards of security and privacy.

What steps must C-level leaders follow while adopting AI and establishing AI governance policies?

C-level leaders should approach AI adoption strategically, with a focus on long-term value and security. A long-term vision requires a comprehensive AI strategy that aligns with the organization's goals. Leaders can begin by understanding and mapping out how AI can drive innovation, efficiency, and competitive advantage over time. This paves the way for mechanisms that provide end-to-end oversight of AI automation and agentic processes, covering data inputs, model operations, and outputs. Organizations can also benefit from investing in internal AI expertise and infrastructure to reduce reliance on external vendors and enhance control over AI initiatives.

Organizations can also use internal data to train AI models, thereby creating unique solutions tailored to specific business needs. This must be in lockstep with policies that safeguard intellectual property generated by AI systems, ensuring that it remains a strategic asset. Once organizations have a clear plan for AI implementation, they must establish effective AI governance frameworks: policies that define acceptable use, ethical considerations, and compliance requirements for AI technologies, along with processes to identify potential risks of AI adoption and mitigate them effectively.

AI governance policies must be crafted with cross-functional collaboration. C-level leaders must involve departments such as IT, legal, and compliance in AI governance to ensure policies are holistic. These policies can only be implemented effectively if employees are trained to use AI tools and follow data security practices. By following these steps, C-level leaders can foster an environment where AI technologies are adopted responsibly, securely, and in a way that contributes to the organization's sustainable growth.

With so many products in the market, how do we identify those that adhere to compliance norms like the DPDP Act?

Identifying AI products that comply with regulations such as the Digital Personal Data Protection (DPDP) Act involves several proactive measures. Look for products that have been certified by recognized bodies or carry compliance attestations indicating adherence to relevant data protection laws. Check whether the product follows industry standards such as ISO/IEC 27001 for information security management.

Safeguarding data privacy in the age of AI is both a formidable technical challenge and an ethical imperative. Thus, you need to ensure technology acts as a catalyst for sustaining humanity's sanctity.


Choose vendors that are transparent about their data handling practices, including data collection, processing, storage, and deletion policies. Ensure that the product allows for data to be stored and processed within required jurisdictions as per legal obligations.

Assess the product's security measures, such as encryption protocols, access controls, and audit trails.

Determine whether the product supports on-premise deployment to maintain greater control over data. Research the vendor's history regarding compliance, security incidents, and responsiveness to regulatory changes. Seek testimonials or case studies from other organizations in similar industries that have successfully implemented the product in a compliant manner.

Involve legal and compliance teams in the procurement process to review contracts, terms of service, and data processing agreements. Consider conducting a privacy impact assessment (PIA) to evaluate potential risks associated with the product.

Recognize that the industry is rapidly evolving, with new tools, models, and interfaces emerging regularly. Stay updated on the development of new compliance mechanisms and frameworks designed to assess AI tools and applications.

By taking these steps, organizations can more effectively navigate the crowded AI marketplace and select products that not only meet their functional needs but also uphold the necessary standards of data protection and privacy.