The deepfake controversy involving Indian celebrities underscores the urgent need for AI regulation and safeguards. These technological advances pose significant risks, driving demand for legal recourse, public vigilance, and AI-based tools to combat such threats.
Discussion of technological development, particularly Artificial Intelligence (AI), often fixates on its perceived pros and cons, fueled by science fiction’s robot-dominated narratives. It is crucial, however, to acknowledge AI’s broader implications; the divide between those willing and those unwilling to engage in AI-related discussion itself highlights the complexity of this technology.
Caution and attention: deepfakes
We are talking about a narrow subset of AI: the abuse of generative AI for nefarious purposes. Since before the phrase “Dark AI” became popular, the Indian government has been proactive in addressing AI-related challenges, and new counter-strategies are emerging worldwide at a rapid pace. But what is the current state of affairs for victims of deepfakes in India?
Suppose a deepfake video featuring your image is posted online. Experts urge you to report the content to the social media platform as soon as possible. These platforms are legally required to handle cybercrime complaints and, in this case, remove the offending content within 36 hours. Ashwini Vaishnaw, Union Minister for Electronics, Information Technology, and Communications, convened a high-level meeting with leading AI researchers and social media platforms to explore strategies for combating deepfakes.
Legal experts and internet users are seeking recourse for deepfakes and AI-related crimes. They recommend filing a complaint with the National Cyber Crime Helpline (1930) and seeking assistance from cyber lawyers. For cases involving intimate images, online services such as stopncii.org offer privacy protection. Dedicated legislation is still awaited.
Using AI to deal with AI
AI models are also being developed to counter Dark AI and prevent the misuse of images posted on social media. These efforts fall into two camps. Open-source tools such as Nightshade subtly alter digital artwork so that AI models trained on it learn corrupted associations, deterring unauthorized scraping. Detection tools, meanwhile, alert consumers when they encounter AI-altered media: Intel’s deepfake detector, FakeCatcher, analyzes the subtle changes in light absorption and reflection caused by blood flow in a subject’s face, a physiological signal that current deepfakes fail to reproduce.
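To make the first idea concrete, here is a deliberately simplified sketch of pixel perturbation. Real tools like Nightshade compute carefully targeted adversarial perturbations; this toy example (function name and pixel values are illustrative, not from any actual tool) only shows the underlying principle of altering an image within a small budget so the change stays imperceptible to humans:

```python
import random

def perturb_pixels(pixels, eps=2):
    """Toy 'cloaking': add bounded random noise to each 0-255 pixel value.

    Real data-poisoning tools optimize the perturbation to mislead model
    training; here we only illustrate the small-budget alteration itself.
    """
    return [min(255, max(0, p + random.randint(-eps, eps))) for p in pixels]

original = [120, 121, 119, 200, 34]   # a tiny stand-in for image data
cloaked = perturb_pixels(original)
```

Each output pixel stays within `eps` of the original and inside the valid 0–255 range, which is why a viewer cannot tell the cloaked image apart from the original.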
Zara Patel and Rashmika Mandanna, both victims of the recent deepfake controversy, illustrate the need for concrete measures to ensure transparency in AI use. The Coalition for Content Provenance and Authenticity (C2PA) is an open technical standard created by software companies to authenticate digital images. Even so, actors and social media influencers, particularly women, bear the brunt of cybercrime without a support system to guide them.
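The principle behind C2PA-style provenance can be sketched in a few lines. The actual standard embeds signed manifests in media files and relies on X.509 certificate chains; the toy version below (all names and the shared key are hypothetical, for illustration only) uses a simple HMAC to show the core idea: bind a hash of the content to a signature, so any later edit is detectable.

```python
import hashlib
import hmac

def make_manifest(content: bytes, signing_key: bytes) -> dict:
    """Record a content hash and sign it (toy stand-in for a C2PA manifest)."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify_manifest(content: bytes, manifest: dict, signing_key: bytes) -> bool:
    """Return True only if the content is unaltered and the signature checks out."""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["sha256"]:
        return False  # content was edited after the manifest was issued
    expected = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

key = b"demo-signing-key"          # hypothetical key, illustration only
photo = b"...raw image bytes..."   # stand-in for real image data
manifest = make_manifest(photo, key)
```

Verifying the original bytes against the manifest succeeds, while any tampered content or wrong key fails, which is the detectability guarantee provenance standards aim for.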
PM Modi Expresses Serious Concerns On AI Deepfakes
Narendra Modi, Prime Minister of India, has expressed concerns about the misuse of artificial intelligence in creating deceptive ‘deepfake’ content. Speaking at the BJP’s Diwali Milan event, Modi emphasized the need for media to educate the public about the potential crisis associated with deepfakes. He reaffirmed his commitment to transforming India into a ‘Viksit Bharat’, stating that this vision is not just rhetoric but a tangible reality.
PM Modi also emphasized the widespread support for the ‘vocal for local’ initiative and India’s progress during the COVID-19 pandemic. He referenced a viral video featuring Rashmika Mandanna that was circulated by many who believed it to be real. His remarks come amid the recent controversy over deepfake videos featuring Mandanna and Katrina Kaif.
Government guidance to platforms on countering deepfakes
The Indian government has advised major social media platforms to remove any deepfake content reported by users within 36 hours or risk losing ‘safe harbour immunity’ and facing criminal and judicial proceedings. In a statement on Tuesday, the Ministry of Electronics and IT urged significant social media intermediaries to exercise due diligence and make reasonable efforts to identify misinformation and deepfakes, particularly content that violates rules and regulations, and to act on such cases within the timeframes set by the IT Rules, 2021.
The government has urged platforms not to host such information, content, or deepfakes and to remove it within 36 hours of it being reported. Intermediaries are reminded that failure to comply with the IT Act and Rules could attract Rule 7 of the IT Rules, 2021, and the loss of safe-harbour protection under Section 79(1) of the Information Technology Act, 2000. Union Minister of State for Electronics and IT Rajeev Chandrasekhar urged people affected by deepfakes to file police complaints and seek remedies under the IT Act, which provides for jail terms and financial penalties against miscreants.