Speaking at the AI Impact Summit in New Delhi, Vaishnaw said that “innovation without trust is a liability”, and the Centre is developing stringent rules to require watermarking and clear labelling of AI-generated content to safeguard the “authenticity” of human creativity.
Vaishnaw added that misinformation, disinformation, and deepfakes are attacking the foundations of society, and that it is the responsibility of social media platforms, AI models and creators to ensure these tools are not misused.
“It is attacking the trust between the institutions of family, of social identities, of governance. It is striking at the root of these institutions and trust. The social media platforms, the AI models and creators, all of us will have to take the responsibility in making sure that new technology strengthens the trust rather than belittling it and creating a break-up of institutions without a break-up,” the minister said.
“Freedom of speech itself relies on trust, and that trust must be protected,” Vaishnaw said, adding that protection against deepfakes and data breaches should be “non-negotiable” for the entire country and society.
The remarks come almost a week after the Centre directed social media platforms to put in place systems to identify and regulate AI-generated content. It ordered the platforms to ensure that AI-generated material was clearly labelled and carried identifiers indicating that it was synthetically created. The steps were taken to curb the spread of material that is illegal, sexually exploitative or misleading.
(With inputs from ANI)