TikTok Launches New Tools to Help Protect Users from Potentially Offensive and Harmful Content
Amid various investigations into how it protects (or doesn’t) younger users, TikTok has announced a new set of filters and options to provide more ways to limit unwanted exposure in the app.
First off, TikTok has launched a new way for users to automatically filter out videos that include words or hashtags that they don’t want to see in their feed.
As you can see in this example, you can now block specific hashtags via the ‘Details’ tab when you action a clip. So if you don’t want to see any more videos tagged #icecream, for whatever reason (weird example, TikTok folk), you can now indicate that in your settings, and you can also block content containing chosen key terms in the description.
The system isn’t perfect, as it doesn’t detect the actual content of a video, only what uploaders have manually entered in their description notes. So if you had a phobia of ice cream, there’s still a chance that you could be exposed to disturbing footage in the app, but it does provide another way to manage your experience.
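To see why description-based filtering has this gap, here’s a minimal sketch of keyword matching against a video’s description text. This is purely illustrative (TikTok’s actual system is not public, and the function and term list here are invented): it matches only text the uploader typed, never the video itself.

```python
# Illustrative sketch only: not TikTok's actual implementation.
# Hides a video if its description contains a blocked hashtag or keyword.

def should_hide(description: str, blocked_terms: set[str]) -> bool:
    """Return True if the description contains any blocked term (case-insensitive)."""
    text = description.lower()
    return any(term.lower() in text for term in blocked_terms)

blocked = {"#icecream", "ice cream"}

print(should_hide("Best #icecream spots in town", blocked))  # True: tag is in the text
print(should_hide("Cold dessert taste test", blocked))       # False: no term matches,
# even though the video itself may well show ice cream
```

The second case is exactly the limitation described above: because only the manually entered description is checked, content with an unlabelled description slips through the filter.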
TikTok says that the option will be available to all users ‘within the coming weeks’.
TikTok’s also expanding its limits on content exposure relating to potentially harmful topics, like dieting, extreme fitness, and sadness, among others.
Last December, TikTok launched a new series of tests to investigate how it might reduce the potentially harmful impacts of algorithmic amplification, by limiting the number of videos in certain sensitive categories that are highlighted in users’ ‘For You’ feeds.
It’s now moving to the next stage of this project.
As explained by TikTok:
“As a result of our tests, we’ve improved the viewing experience so that viewers now see fewer videos about these topics at a time. We’re still iterating on this work given the nuances involved. For example, some types of content may have both encouraging and sad themes, such as disordered eating recovery content.”