Google has come under scrutiny after a support document revealed that its Gemini chatbot apps store user conversations by default, raising significant privacy and security questions. The policy applies to Gemini users on the web, iOS, and Android.
The support page discloses that conversations reviewed or annotated by human reviewers, along with related data such as language, device type, and location, are stored separately and are not deleted even when users delete their Gemini Apps activity. This data can be retained for up to three years. Google says the practice exists to improve its Gemini AI apps.
However, concerns have been raised about how much control Google exercises over this stored data and where and how it is kept. Users and critics alike are troubled by the lack of clarity around what human annotators actually see and do with conversations, and by the potential implications for data security.
Google notes that users can stop future chats from being reviewed or stored by turning off Gemini Apps Activity in the My Activity dashboard. Individual chats and prompts can also be deleted from the Gemini Apps Activity screen.
Even with these settings disabled, chats conducted through Gemini may still be retained for up to 72 hours, which Google says is necessary to provide a secure experience and keep the apps functioning. Google also warns users against entering sensitive or confidential information into chats, since conversations may be exposed to human reviewers and used to train its machine learning models.
While Google's data retention policies for generative AI are not significantly different from those of its rivals, concerns persist about how to balance user privacy against the data needs of AI model development and improvement. Similarly liberal retention policies have drawn regulatory scrutiny before, as seen with OpenAI's practices.
In response to growing concerns, some companies have restricted or outright banned certain types of data from being entered into generative AI tools. Even so, safeguarding user privacy remains a challenge as AI tools continue to evolve and proliferate across industries.
Efforts to address these challenges are underway, with companies increasingly recognizing the importance of responsible data usage and the need for robust privacy measures in AI development.
Tags
tech