
Is Character AI safe?

  • by Kumar Hemant
  • 6 min read

Character AI is a prominent AI chatbot platform that lets users interact with virtual characters modelled on real or fictional people.

Through Character AI, users can converse with a wide range of characters, both real and fictional. You can chat with AI versions of celebrities like Ariana Grande, Nicki Minaj, and Billie Eilish, as well as anime characters like Levi Ackerman, Armin Arlert, and Goku.

Additionally, users have the creative freedom to build and train their own AI characters, giving them specific personality traits, preferences, and conversational mannerisms, which takes fanfiction to a new level.

Character AI is powered by a neural language model that generates human-like text responses, delivering an authentic conversational experience to its users.
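Character AI hasn't disclosed its internals, but a minimal sketch of the general persona-chatbot pattern, where a character description goes into a system prompt and a large language model generates the replies, might look like the following. The openai client, model name, and persona here are illustrative assumptions, not Character AI's actual stack:

```python
# A rough, hypothetical sketch of a persona chatbot: the character is
# defined in a system prompt, and a general-purpose language model
# generates replies in that voice. Character AI's real implementation
# is not public; the client, model, and persona below are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

persona = (
    "You are Captain Nova, a cheerful fictional space explorer. "
    "Stay in character and keep replies short and friendly."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model, not Character AI's
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What's the strangest planet you've seen?"},
    ],
)
print(response.choices[0].message.content)
```

The persona prompt is what keeps replies in character; the underlying model remains a general-purpose text generator.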

But how safe is Character AI? In this article, we'll walk through Character AI's privacy policy so that you can assess the platform's safety for yourself.

Also read: Is Telegram safe?


Character AI’s privacy policy explained

Whenever you want to assess a website’s or an application’s safety, you should start by looking at the privacy policy. In this section, we’ll explore what data Character AI collects and how it uses your data.

Upon registration, an email address and password are required; you then verify the email address and create a username. As per the company's privacy policy, other information gathered by Character AI includes:

  • Personal information (your name and other contact details)
  • Log data (IP address, device name, etc.)
  • Website cookies (including persistent variants)
  • Usage choices (interaction patterns with the service)

The policy defines “Personal Information” as data that could potentially identify an individual. It explains the various ways in which Character collects personal information:

  • Personal information you provide: When you create an account or communicate with Character AI, such as through messages, your name, contact details, and message contents are collected (referred to as “Communication Information”).
  • Personal information through social media: Interaction with Character’s Social Media Pages may lead to the collection of personal information you choose to provide, like contact details (“Social Information”). Additionally, aggregate data and analytics about your engagement with these pages may be shared.
  • Personal information automatically received: When you use Character’s services, certain technical information is automatically collected, such as your IP address, browser type, visit details, device information, usage patterns, and cookies. This information is categorised as “Technical Information.”

Character AI uses cookies to operate and administer the site, gather usage data, and improve the user experience. Limiting cookies may affect some aspects of the service's functionality. You can limit or block cookies through your browser's privacy settings.
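If you want to see what cookies the site actually sets before limiting them, a minimal sketch using Python's third-party requests library can list the cookies in the initial response (cookies set later by JavaScript won't appear, and the site may serve different cookies to scripts than to browsers):

```python
# List the cookies a site sets in its first HTTP response, using the
# third-party requests library. The exact cookie names will vary.
import requests

response = requests.get("https://character.ai", timeout=10)
for cookie in response.cookies:
    # cookie.expires is None for session cookies; a timestamp marks a
    # persistent cookie that survives closing the browser
    kind = "persistent" if cookie.expires else "session"
    print(f"{cookie.name}: {kind}, domain={cookie.domain}")
```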

Moreover, Character AI may monitor user-generated content to ensure it doesn't contain sensitive personal information or inappropriate material.

Character AI also includes a Not Safe For Work (NSFW) filter, since the platform admits users aged 13 and above (16 and above in the European Union). The filter prevents characters from engaging in sexual or violent conversation and from using profanity or hate speech. However, some users have found ways to circumvent it with euphemisms, metaphors, or deliberate misspellings. While such workarounds may amuse some users, they raise ethical and legal questions for the platform and its users.
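Character AI hasn't published how its filter is implemented. As a toy illustration of why such filters are hard to make watertight, the hypothetical blocklist filter below (every word in it is a placeholder) shows how a single misspelling slips past naive keyword matching:

```python
# A toy blocklist filter, purely to illustrate why naive keyword
# matching is easy to evade. Character AI has not published how its
# real NSFW filter works; the words below are placeholders.
BLOCKLIST = {"badword"}

def naive_filter(message: str) -> bool:
    """Return True if the message should be blocked."""
    return any(word in BLOCKLIST for word in message.lower().split())

print(naive_filter("this contains badword"))  # True: exact match caught
print(naive_filter("this contains b4dword"))  # False: misspelling slips past
print(naive_filter("a harmless euphemism"))   # False: meaning isn't analysed
```

Production filters typically rely on machine-learning classifiers rather than keyword lists, but the same cat-and-mouse dynamic applies.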


How does Character AI use your data?

Character AI uses your data in the following ways:

  • To provide and administer access to the services
  • To inform you about features or aspects of the services they believe might be of interest to you
  • To respond to your enquiries, comments, feedback, or questions
  • To comply with legal obligations and legal processes

Character AI may share your personal information with the following third parties without prior notice:

  • Vendors and service providers
  • Business transfers
  • Legal requirements
  • Affiliates

Now that you know how Character AI collects and uses your data, you can better assess its safety. In the next section, we'll analyse gaps in the privacy policy.

Also read: Top 7 Storyblocks alternatives


Gaps in Character AI’s privacy policy


After assessing Character AI's privacy policy, we found no specific mention of encryption or concrete data-security measures. The Security section states only that “We implement commercially reasonable technical, administrative, and organizational measures to protect Personal Information both online and offline from loss, misuse, and unauthorized access, disclosure, alteration, or destruction.”

Moreover, they mention that “no Internet or e-mail transmission is ever fully secure or error-free. In particular, e-mails sent to or from us may not be secure. Therefore, you should take special care in deciding what information you provide to us via the Services or e-mail.” This wording appears to have been lifted almost verbatim from OpenAI’s privacy policy, which states that “no Internet or email transmission is ever fully secure or error-free. In particular, emails sent to or from us may not be secure. Therefore, you should carefully decide what information you send us via the Service or email.”
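You can quantify how close the two passages are with Python's standard-library difflib; the strings below are the sentences quoted above:

```python
# Quantify the resemblance between the two policy passages quoted
# above, using Python's standard-library difflib.
from difflib import SequenceMatcher

character_ai = (
    "no Internet or e-mail transmission is ever fully secure or error-free. "
    "In particular, e-mails sent to or from us may not be secure."
)
openai_policy = (
    "no Internet or email transmission is ever fully secure or error-free. "
    "In particular, emails sent to or from us may not be secure."
)

ratio = SequenceMatcher(None, character_ai, openai_policy).ratio()
print(f"Similarity: {ratio:.0%}")  # near 100% for near-identical wording
```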

This is one area where generative AI companies should take more concrete steps by establishing, and clearly documenting, encryption and data-security measures.

Also read: ChatGPT character limit explained


How to safely use Character AI?

To use Character AI safely and responsibly, follow these best practices:

Exercise prudent privacy and identity management

While interacting with Character AI, be careful about the personal information you share with characters and the platform. Never divulge sensitive or confidential details such as your full name, address, phone number, email, passwords, banking credentials, or social security number.

Be wary of characters that solicit such information or try to manipulate you into anything that makes you uncomfortable. If you suspect a character is untrustworthy, end the conversation and report it to the platform.


Display respect for others’ privacy

When engaging with Character AI, respect the privacy and identity of others, especially those whose likeness underpins the virtual characters. Don't use Character AI to impersonate, harass, or defame anyone online.

Similarly, avoid creating characters that spread offensive, hateful, or harmful messages targeting any individual or group. If you encounter such characters, report them to the platform and stop interacting with them.


Use your common sense

While interacting with Character AI, maintain a healthy scepticism about the information you get from characters and the platform.

Don't accept everything they say at face value, as their responses aren't always accurate or reliable. Corroborate information through other sources before accepting it as fact.

Additionally, refrain from using Character AI to spread misinformation or false information online. Treat it as a tool for entertainment and education rather than for political or social agendas.

In conclusion, follow the precautions above to use Character AI safely, and you're good to go.

Also read: How to fix Character AI chat error?

Kumar Hemant

Deputy Editor at Candid.Technology. Hemant writes at the intersection of tech and culture and has a keen interest in science, social issues and international relations. You can contact him here: kumarhemant@pm.me
