TikTok has exploded in popularity, growing from just 55 million global users in January 2018 to over 1 billion monthly active users today. The app allows users to create and share short videos set to music, often using fun filters and effects. As the platform has grown, TikTok has implemented artificial intelligence filtering to maintain a safe, positive environment and high-quality user experience.
TikTok uses AI to proactively filter inappropriate, dangerous, or low-quality content. Computer vision scans videos, while natural language processing analyzes text, captions, hashtags, and comments. The goal is to identify and remove content that violates TikTok’s community guidelines before users see it. This helps protect minors, limit dangerous misinformation, and surface higher-quality videos tailored to each user.
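TikTok has not published the details of these models, but the way vision and language signals might feed a single routing decision can be sketched in a few lines of Python. The example below is purely illustrative: the scoring functions are stubs standing in for real computer vision and NLP classifiers, and the thresholds are invented, not TikTok’s.

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    frames: list = field(default_factory=list)   # decoded video frames
    caption: str = ""
    hashtags: list = field(default_factory=list)

# Stub scorers standing in for trained models (hypothetical).
def vision_risk(frames) -> float:
    """0.0-1.0 likelihood the visuals violate guidelines (placeholder)."""
    return 0.0

def text_risk(text: str) -> float:
    """0.0-1.0 likelihood the text violates guidelines (placeholder)."""
    return 0.0

def route(upload: Upload, block_at: float = 0.9, review_at: float = 0.5) -> str:
    """Combine visual and textual signals into one moderation decision."""
    score = max(
        vision_risk(upload.frames),
        text_risk(upload.caption + " " + " ".join(upload.hashtags)),
    )
    if score >= block_at:
        return "remove"        # confident violation: filtered before anyone sees it
    if score >= review_at:
        return "human_review"  # uncertain: queue for a human moderator
    return "publish"

print(route(Upload(caption="my new dance video", hashtags=["fyp"])))  # -> publish
```

In a real system the interesting work lives inside the stubbed scorers; the routing logic itself stays simple.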
Inappropriate Content
TikTok proactively detects and removes inappropriate content that violates its Community Guidelines, including nudity, pornography, and sexually explicit material. Its computer vision models are trained to recognize exposed skin, sexual gestures, and other visual signals of potential nudity or sexual activity, flagging suspect videos for review by human moderators.
Violent, gory, and self-harm content is also prohibited on TikTok. The AI filters can recognize weapons, blood, injuries, and dangerous behaviors indicating self-harm. Any content promoting or normalizing suicide or self-harm is swiftly removed.
Hate speech, bullying, harassment, and other abusive behaviors have no place on TikTok per its policies. The AI scans video and audio, analyzing linguistic cues and semantics to identify such inappropriate content. It also considers the intent and context of speech. Any content meant to incite harm against individuals or groups based on protected attributes is banned.
By leveraging AI to supplement human moderators, TikTok aims to keep its platform safe and positive for its extensive user base. The algorithms continue to improve through machine learning techniques and input from content review specialists.
Misinformation Spreading on TikTok
TikTok has faced criticism for enabling the spread of misinformation on its platform. A 2022 NewsGuard study found that nearly 1 in 5 TikTok videos returned in searches for prominent news topics contained misinformation (NY Times). This includes false claims, conspiracy theories, manipulated media, and scam content.
Videos containing political misinformation and false claims about elections or public figures often go viral quickly on TikTok before fact-checkers can debunk them. There is also rampant health misinformation, such as videos promoting false COVID-19 treatments or vaccine conspiracies (University of Illinois). TikTok’s algorithm seems to reward sensationalized content, allowing misleading videos to rapidly reach large audiences.
Scams and spam are also common problems, with accounts promoting fraudulent investment opportunities, fake cash giveaways, and phishing attempts. TikTok has policies prohibiting harmful misinformation, but enforcement is inconsistent. Critics say more needs to be done to curb the spread of falsehoods that can cause real-world harm.
Copyrighted Material
TikTok has strict policies in place to prevent users from uploading copyrighted content belonging to others without permission. This includes music, videos, and images that are not owned by the user posting them. According to TikTok’s Intellectual Property Policy, they do not allow any content that infringes copyright.
TikTok uses filtering technology to identify and remove videos containing copyrighted songs or audio tracks. If you use someone else’s song in your video without permission, it will likely be removed. Limited exceptions apply when the use qualifies under doctrines such as fair use.
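TikTok does not disclose how its music matching works, but automated copyright filters on large platforms typically rely on audio fingerprinting: compact hashes of an uploaded track are compared against a database of protected recordings. The sketch below is a heavily simplified, hypothetical version of that idea (real systems fingerprint spectrogram peaks, not raw sample windows) and is not TikTok’s actual implementation.

```python
import hashlib

# Hypothetical reference index: fingerprint hash -> track metadata.
REFERENCE_INDEX: dict[str, str] = {}

def fingerprint(samples: list[float], window: int = 4096) -> list[str]:
    """Hash a crude per-window feature as a stand-in for spectrogram-peak landmarks."""
    hashes = []
    for start in range(0, max(len(samples) - window, 0), window):
        chunk = samples[start:start + window]
        feature = round(sum(abs(s) for s in chunk) / window, 4)  # mean amplitude (toy feature)
        hashes.append(hashlib.sha1(str(feature).encode()).hexdigest()[:16])
    return hashes

def matches_known_track(samples: list[float], min_hits: int = 5) -> bool:
    """Flag the audio if enough windows match fingerprints of a protected recording."""
    hits = sum(1 for h in fingerprint(samples) if h in REFERENCE_INDEX)
    return hits >= min_hits
```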
Videos containing logos, trademarks, or brand names without authorization may also be taken down for intellectual property infringement. Users are advised to only upload original content they have the rights to share on TikTok.
If you find your copyrighted content posted on TikTok without your consent, you can file a copyright report through TikTok’s app or website. TikTok claims they have teams dedicated to reviewing these reports and taking down infringing content.
Minors’ Safety
TikTok aims to protect minors from seeing inappropriate adult or mature content on the platform. Their Community Guidelines state that users must be at least 18 years old to view or share mature content.
TikTok has strict policies against child sexual exploitation and grooming behaviors. Their Community Guidelines prohibit any content related to child sexual abuse, grooming, or exploitation. This includes predatory interactions, inappropriate messaging, and attempting to obtain personal information from minors.
To prevent predatory behavior, TikTok does not allow users who have been convicted of certain crimes to live stream with minors. The platform also restricts interactions such as comments, Duets, and Stitches for some younger accounts. TikTok’s safety teams proactively detect and remove accounts that engage in grooming or preying on minors.
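Restrictions of this kind are usually implemented as age-based feature gates checked at the point of use. The snippet below is a hypothetical illustration only; the feature names and age thresholds are invented for the example and do not reflect TikTok’s exact settings.

```python
# Hypothetical minimum ages per feature (illustrative values, not TikTok's actual policy).
MINIMUM_AGE = {
    "direct_messages": 16,
    "duet": 16,
    "stitch": 16,
    "live_hosting": 18,
}

ACCOUNT_MINIMUM_AGE = 13  # baseline age to hold an account at all

def feature_allowed(feature: str, user_age: int) -> bool:
    """Gate a feature on the user's age; unknown features fall back to the account minimum."""
    return user_age >= MINIMUM_AGE.get(feature, ACCOUNT_MINIMUM_AGE)

print(feature_allowed("live_hosting", 15))  # False: restricted for younger accounts
```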
In their Youth Safety and Well-Being policies, TikTok emphasizes their commitment to minor safety and creating an age-appropriate experience.
Adult Content Promotion
TikTok has strict policies prohibiting the promotion of adult content and services on the platform. According to TikTok’s Branded Content Policy, “You must not post content promoting illegal products or services. You must not promote products or services relating to any of the Prohibited Industries or Products” (TikTok Branded Content Policy).
Specifically, TikTok prohibits the promotion of pornography, escort services, dating sites, gambling services and products, and more. Their advertising policies state, “The promotion, sale, solicitation of, or facilitation of access to pornographic material, sex toys, and supplies such as lubricants, fetish, or sexual fantasy products are prohibited on TikTok Ads” (TikTok Advertising Policies).
Any accounts found promoting adult content or services may be permanently banned. TikTok aims to maintain a safe environment, especially for younger users, by cracking down on inappropriate or illegal promotions.
User Engagement Tactics
TikTok has policies discouraging creators from using tactics that artificially inflate engagement. This includes prohibiting content that begs for likes, comments, shares, or followers. According to TikTok’s Community Guidelines, creators should not “artificially increase view, like, or follower counts.”
TikTok also discourages clickbait and sensationalism. Their policies state that videos should not have “exaggerated, shocking, or extreme titles.” This helps maintain the authenticity and integrity of the platform.
Account Integrity
TikTok has struggled with issues related to account integrity, including fake accounts, bots, and inauthentic activity. The platform’s popularity has also made it a target for account selling and password sharing. Commonly cited signs of an inauthentic account include:
- Generic username and profile photo
- No personal details in bio
- Reposts of viral content
- Minimal original content
- Skewed follower-to-following ratio
- Repetitive or spammy comments
TikTok uses both human moderators and AI to detect inauthentic accounts by analyzing behavior patterns and activity. The company claims to remove millions of fake accounts daily. However, the scale of TikTok’s user base makes it an ongoing battle. Users are advised to be vigilant in identifying bots and fake accounts themselves.
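The red flags listed above can be folded into a simple rule-based score of the kind a first-pass detector might use before richer behavioral models take over. Everything below is hypothetical: the weights and the threshold are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class AccountStats:
    has_bio: bool
    original_posts: int
    reposts: int
    followers: int
    following: int
    duplicate_comment_ratio: float  # share of the account's comments that repeat the same text

def inauthenticity_score(a: AccountStats) -> float:
    """Sum weighted heuristics drawn from the signals above (illustrative weights)."""
    score = 0.0
    if not a.has_bio:
        score += 0.2                                    # no personal details in bio
    if a.original_posts == 0 and a.reposts > 0:
        score += 0.3                                    # only reposts of viral content
    if a.following > 0 and a.followers / a.following < 0.01:
        score += 0.2                                    # skewed follower-to-following ratio
    score += min(a.duplicate_comment_ratio, 1.0) * 0.3  # repetitive or spammy comments
    return score

suspect = AccountStats(False, 0, 40, 3, 800, 0.9)
print(inauthenticity_score(suspect) >= 0.6)  # True: worth queueing for review
```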
Moderation Methods
TikTok uses a combination of automated moderation technology and human moderators to filter content on the platform. According to TikTok’s Transparency Center, uploaded videos are first reviewed by automated moderation technology that aims to identify violations of the Community Guidelines before a video is posted publicly (https://www.tiktok.com/transparency/en-us/content-moderation/). This allows TikTok to proactively filter some inappropriate content before it ever reaches users.
However, the automated systems are not perfect, and some objectionable content does get through. In these cases, TikTok relies on users to flag inappropriate videos for additional human review. Moderators then review flagged content and decide whether to remove it or leave it up. According to TikTok, the goal is to identify and remove violative content as quickly as possible, though the sheer volume of uploads makes this challenging (https://newsroom.tiktok.com/en-eu/evolving-our-approach-to-content-enforcement).
The combination of automated filtering before posting and human flagging of content after posting allows TikTok to moderate content at the massive scale required for a platform of its size and growth.
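That two-stage flow, automated screening before a video goes live plus user reports feeding a human review queue afterwards, can be sketched as a small piece of glue code. As before, this is a simplification under stated assumptions; the function names and the pre-publish check are stand-ins, not TikTok’s real interfaces.

```python
import queue

review_queue: "queue.Queue[dict]" = queue.Queue()  # published videos awaiting human review

def automated_check(video: dict) -> bool:
    """Stage 1 stub: return True if the upload appears to violate guidelines."""
    return video.get("risk_score", 0.0) >= 0.9

def on_upload(video: dict) -> str:
    """Automated review runs before the video is posted publicly."""
    return "rejected_before_posting" if automated_check(video) else "published"

def on_user_report(video: dict) -> None:
    """Stage 2: anything users flag after publication goes to human moderators."""
    review_queue.put(video)

def moderator_review(video: dict, violates_guidelines: bool) -> str:
    """A human moderator makes the final call on flagged content."""
    return "removed" if violates_guidelines else "left_up"

status = on_upload({"id": 1, "risk_score": 0.2})   # passes the automated check
if status == "published":
    on_user_report({"id": 1})                      # a viewer flags it anyway
```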
Conclusion
In summary, TikTok uses a variety of AI filters to enforce its community guidelines and maintain a high-quality user experience. The main categories of content filtered by TikTok’s AI include inappropriate content, misinformation, copyrighted material, threats to minors’ safety, and adult content promotion. The platform relies on machine learning algorithms to analyze text, images, videos, audio, metadata, and user reports to detect policy violations. Content moderation at this scale is inherently difficult, and TikTok’s AI filters have faced scrutiny over issues like racial bias, lack of transparency, and overly aggressive takedowns.
The filters designed to beautify users and enhance engagement have also proven controversial, as some argue they promote unrealistic beauty standards and social media addiction. TikTok claims its goal is to create an enjoyable community and that it continues working to improve its automated systems. However, many believe significantly more progress is needed to address concerns about biases, errors, and the impacts of AI technology on society. Ongoing transparency, accountability, and collaboration with outside experts would go a long way in building public trust. Content moderation remains an imperfect science, but with care, AI tools can help platforms like TikTok promote their community values while respecting free expression.