In August 2024, WhatsApp banned 84.58 lakh accounts in India for violating its policies, according to its monthly transparency report. These reports are mandated by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which require platforms to disclose their content moderation practices.
Of these, 16.61 lakh accounts were proactively flagged by WhatsApp’s automated detection systems, without any prior user complaint, for suspicious activity such as bulk messaging, which often signals scams or abuse.
During the same period, WhatsApp received 10,707 complaints covering issues ranging from ban appeals to safety concerns, and took action on 93 of them. Users reached the platform’s India Grievance Officer via email and post.
WhatsApp employs a robust abuse detection mechanism that works at various stages of an account’s lifecycle to prevent harmful behaviour before it escalates. This includes checks during registration, while messaging, and upon receiving negative feedback.
The platform typically bans accounts for sending spam, engaging in unauthorised bulk messaging, or carrying out activities that are illegal under Indian law. User complaints about abuse or inappropriate behaviour can also lead to account suspension.
Under the IT Rules, 2021, digital platforms with more than 50 lakh users must publish a monthly report detailing the user complaints received and the action taken on them, ensuring transparency and accountability in digital interactions.