Social media platforms like TikTok and Instagram have exploded in popularity among teenagers and young adults. However, concerns around safety, privacy, and mental health have left many parents worried about the dangers these apps may pose. With over 1 billion monthly active users on Instagram and over 1.5 billion on TikTok, understanding the risks is crucial.
In this article, we’ll analyze key safety issues on TikTok and Instagram to determine which platform might be safer for young users. Looking at factors like data collection policies, exposure to inappropriate content, cyberbullying, and platform safeguards, we’ll provide an evidence-based comparison of the two apps. Our goal is to empower readers with research-backed insights to make informed decisions about social media use.
User Demographics
There are distinct differences in the demographics of TikTok and Instagram users. According to LitCommerce, TikTok tends to attract younger users, with 41% of its users between the ages of 16 and 24. In contrast, only 31% of Instagram’s users fall into the same age bracket. TikTok’s next largest demographic is the 25-34 age group, representing about 29% of users. Meanwhile, Instagram sees higher usage among those aged 25-34 at 32%, and 35-44 at 14%.
The different age makeups of the two platforms’ audiences lead to divergent trends and content styles. TikTok caters more to youth culture and viral challenges, while Instagram has an older audience interested in lifestyle, influencers, and aesthetics. Brands must understand these demographic splits to create content tailored for each platform’s primary users.
Data Privacy
TikTok and Instagram handle user data somewhat differently. TikTok collects data such as location, contacts, and interests to personalize content and target ads. According to research, TikTok gathers more personal data than other social media apps (CNBC, 2022). TikTok’s privacy policy states it collects location data to within about 3 square kilometers, a finer granularity than the roughly 5 square kilometers Instagram uses (The Conversation, 2023).
Instagram also collects user data for advertising and recommendations, though it states it does not sell data to third parties. Both platforms say users can control privacy settings, but the defaults allow extensive data gathering. While TikTok appears to collect more granular user data, some argue Instagram’s parent company Meta still uses data across its platforms for targeted ads.
Predators and Stranger Danger
Both TikTok and Instagram carry risks of predators preying on minors. On social media, it’s easy for adults to disguise their identity and age to connect with teens and children. According to the National Society for the Prevention of Cruelty to Children (NSPCC), there were over 5,000 grooming offenses recorded by police in the UK in 2019-2020 (NIDirect).
Predators often use tactics like catfishing and grooming to build trust and an emotional connection with a child. This can start off friendly but quickly turn inappropriate. According to the Mayo Clinic, predators may ask minors for explicit photos, attempt to isolate them from family and friends, or try to arrange an in-person meeting (Mayo Clinic).
On both TikTok and Instagram, privacy settings allow users to restrict messages and comments from strangers. However, predators find ways around these restrictions. Parents should have open conversations with kids about online stranger danger and monitor their social media activity.
Mental Health
Frequent social media use has been associated with increased risks of mental health issues like depression, anxiety, low self-esteem, and sleep problems in teens and young adults (Can Social Media Affect Mental Health? Find Out The …). One key mechanism is social comparison: measuring oneself against the carefully curated posts of others can lead to feelings of inadequacy and lowered self-worth (Exploring the Impact of Social Media on Mental Health from …). The constant pressure to portray an idealized life online can also take a toll.
In addition, some research suggests social media may be addictive, making it hard for teens and young adults to disengage. The fear of missing out (FOMO) and desire for likes/comments can drive compulsive checking habits. This distraction and obsession with social media may displace time for healthy activities and in-person socialization. Setting reasonable limits on usage can help mitigate these risks.
Cyberbullying
Cyberbullying is a significant issue on social media platforms like TikTok and Instagram, where users can anonymously target and harass others. According to research, 22% of children have experienced bullying on social media (Study Shows That 22% Of Kids Are Bullied on Social Media). Cyberbullying tactics include sending threatening messages, publicly shaming or humiliating someone, spreading rumors, and excluding people from online groups.
The anonymity afforded by social media emboldens cyberbullies, allowing them to say things online they wouldn’t in person. The harassment can be relentless and inescapable, following victims anywhere they have an online presence. Cyberbullying has been linked to anxiety, depression, and even suicide in severe cases. According to statistics, over 5 million children skip school each year because of bullying, underscoring the detrimental impacts on victims’ mental health and education (Cyberbullying: Twenty Crucial Statistics for 2024).
While TikTok and Instagram have reporting features to flag abusive content, social platforms still struggle with properly detecting and removing cyberbullying. Parents should educate children on online safety strategies, monitor their social media activity, and provide emotional support if cyberbullying occurs. Organizations like Stop Bullying Now and Kids Helpline also provide anti-bullying resources and counseling hotlines for affected youth.
Violent/Explicit Content
Both platforms struggle with moderating violent and explicit content, though their approaches differ. On Instagram, violent and explicit content is against community guidelines and can be reported by users. Instagram utilizes AI and human moderators to review flagged content and disable accounts that repeatedly violate policies (citation 1). However, critics argue Instagram’s efforts have been insufficient, with violent/explicit content still slipping through the cracks (citation 2).
On TikTok, the app’s community guidelines similarly prohibit violent and explicit content. TikTok claims its AI proactively detects and removes the vast majority of policy-violating content before it’s reported. However, investigations have revealed flaws in TikTok’s enforcement, with some violent/explicit videos amassing millions of views before removal. TikTok has pledged to improve, noting moderation is an “industry-wide challenge” (citation 3).
Overall, while both platforms officially ban violent and explicit content, gaps in enforcement remain. Instagram’s reporting system places more of the burden on users, while TikTok’s AI aims to catch violations preemptively. On both platforms, however, the combination of AI systems and human moderators handles this challenging content imperfectly.
Misinformation
The spread of false information or misinformation is a major concern on social media platforms like TikTok and Instagram. According to a 2022 literature review, misinformation spreads quickly on social media through retweets, shares, and algorithmic promotion (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8853081/). The COVID-19 pandemic provided many examples of viral health misinformation on sites like TikTok, which can negatively impact public health efforts (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9533200/). An analysis of large-scale social networks found true information and misinformation often spread differentially, with falsehoods spreading farther and faster in many cases (https://www.hindawi.com/journals/scn/2021/7999760/). Fact-checking content and thinking critically before sharing is important to limit the spread of misinformation online.
Account Security
Social media platforms like TikTok and Instagram come with inherent risks of account hacking and identity theft. As the platforms grow in popularity, they become bigger targets for cybercriminals looking to steal personal information or hijack accounts. Recent research indicates there has been a rise in hacking attempts and scams on TikTok, often targeting younger users who may be less security-conscious.
One major risk is password compromise through phishing. Scammers may send links pretending to be from TikTok, tricking users into inputting their login credentials on fake pages. Once hackers gain access, they can lock out the real account owner, change profile information, and post inappropriate content. TikTok accounts are also targets for spreading malware through fake links.
Another threat is SIM swapping, where scammers take over someone’s phone number to reset passwords and bypass two-factor authentication. TikTok accounts connected to compromised phone numbers can easily be stolen. Experts recommend users be vigilant about suspicious login activity and use strong unique passwords to reduce hacking risks. Enabling two-factor authentication is also advised.
Identity theft is another concern, as hacked accounts can reveal names, photos, and other personal details criminals can exploit. Children are especially vulnerable if privacy settings aren’t managed properly. Monitoring kids’ security settings and reviewing shared information should be a priority.
Overall, while TikTok presents growing account security challenges, being cautious about links, using strong passwords, enabling two-factor authentication, and limiting personal information shared can help reduce risks of hacking and identity theft.
Conclusion
In summary, both TikTok and Instagram present risks around data privacy, predators, cyberbullying, and exposure to inappropriate content. However, TikTok’s notably younger user base arguably makes it riskier for young users. Parents should enable all privacy settings, monitor their kids’ accounts and usage, have open discussions about potential dangers, and set reasonable limits on screen time. With proper precautions, social media can be enjoyed safely. The key is education, communication, and parental oversight rather than an outright ban.
Social media opens up new avenues for self-expression, creativity and connection. But it also creates new challenges for protecting privacy and mental health, especially for impressionable young users. By taking an active role and having ongoing conversations, parents can help minors safely navigate these platforms.