TikTok Expands Use of Artificial Intelligence for Content Moderation, Cuts Hundreds of Jobs


Reflecting the industry's rapid shift toward artificial intelligence, TikTok has announced that it will expand its use of AI to monitor content on the platform and detect harmful material, a change that is beginning to directly affect the roles of human content moderators.

According to a report published by The Times, TikTok sent an internal email to its content moderation and quality assurance teams at its London office, informing them that their roles had been terminated as part of a broader restructuring plan aimed at reducing reliance on human reviewers.

The email further stated that ByteDance, TikTok’s parent company, plans to cut hundreds of jobs across the United Kingdom and Southeast Asia, amid growing dependence on deep learning models and advanced AI technologies, which now play a central role in detecting policy violations and inappropriate content.

The report also noted that TikTok may outsource content monitoring and quality control to third-party providers, in a bid to reduce costs and enhance operational efficiency through technological solutions.

Despite these cuts, TikTok emphasized that the new direction would not affect its ongoing efforts to expand its workforce in the United States. The company also reaffirmed its commitment to user safety, stating that dedicated safety teams will remain operational in the UK.

This development comes at a time when the tech industry is undergoing significant transformations through the adoption of AI — particularly in areas such as content supervision, misinformation control, and the fight against hate speech.
