Covid-19 has been the biggest disruptor of modern times, upending nearly every aspect of our lives. It’s led to an inevitable surge in technology adoption as life has become digital by default – and we’re all spending more time on devices as a result. But, while this has helped keep people connected and led to an explosion of digital innovation, there is also a darker side to our online addiction.
Cyberbullying has seen an alarming increase, with young people often bearing the brunt. Recent studies found a 70 percent increase in bullying and abusive language among kids and teens on social media and chat forums, and a 40 percent increase in toxicity on gaming platforms.
Seven in 10 young people experience online bullying before they turn 18, and with more teen suicides than ever being attributed in some way to cyberbullying, it’s no exaggeration to say we are in the midst of a cyberbullying crisis.
Behind these terrible statistics of harassment, impersonation, outing, trickery, exclusion, and cyber threats are stories of young people withdrawing from family, spending time alone, and watching their self-esteem plummet. One click can ruin lives and trigger suicidal behaviors, self-harming, and eating disorders.
Cyberbullying affects everyone in different ways. But UNICEF argues that it “can feel as if you’re being attacked everywhere, even inside your own home. It can seem like there’s no escape.” It’s now recognized as being frequently more harmful than traditional bullying because victims can be reached anywhere, at any time, and exposed to a much wider audience.
One of the biggest challenges for digital platforms is that the Internet is often seen as an enabler of aggressive, bullying behavior. Because it affords anonymity and the chance to hide behind pseudonyms and fake profiles, bullies fear no comeback or repercussions for what they do. Unlike face-to-face aggressors, they cannot see the consequences of their actions and consequently feel less remorse. They hide in the shadows and don’t believe they will be caught.
For too long this has been tolerated and contained, instead of being recognized as a poison that needs to be eliminated. Now there can be no letting up in the effort to make platforms safe spaces for healthy engagement. Unless you build in trust and safety, you put your users and your reputation at risk of harm. Malicious content and bad actors can destroy your brand.
It’s become a truism that trust is one of the easiest things to lose and one of the most difficult to win back. With global research showing that technology is no longer the most trusted sector and that concerns about the societal impact of innovation are fueling distrust, demand for content moderation is growing.
Every day, 2.5 quintillion bytes of data are created, for good or for ill, and measures to protect the public are no longer a nice-to-have optional extra; they’re absolutely essential.
At Enshored, we use leading-edge technology solutions, supporting ever-evolving policies and frameworks, to help identify harmful content and protect users 24/7. This is driven by a culture of continuous improvement: we hold daily pre-shift huddles to focus on goals and the lessons we can take from the previous shift. The content moderation landscape evolves quickly, with ever-changing priorities and new legislation to be mindful of, so ours is a fluid, fast-learning environment. The one constant is that we are passionately committed to keeping people safe and protecting your brand.
The golden thread running through our content moderation team is the drive to prevent lies, discrimination, and tragedies. It reflects an enormous sense of social purpose across the company: we are driven not only to identify, flag, and remove inappropriate content, but to help users cross the trust gap and make platforms safe for all.
But it’s not just a vocational commitment to rooting out toxic content that gives us our edge. Combining the very best people with the latest advances in AI is helping us filter content faster and more precisely to meet increased obligations. This combination of mission-driven teams and the latest AI architectures helps us keep one step ahead of cyberbullies.
In a recent training program, our newest members of staff shared their thoughts on what made the job so rewarding, as well as challenging. “For me, the importance of each individual decision that we make as content moderators is mirrored in the fact that we are able to help our users enjoy the platform without being exposed to the potential stress and harm of malicious content,” explained Joyce Ann Septimo.
“We aim to be a line of defense for the victims of hate, abuse, and disinformation on the net,” added Delfin Asuncion. “The content moderator role is tough,” Maria Vanessa Yago went on to explain. “You need a strong stomach to get it done. That’s why I consider content moderators as invisible heroes of the internet – we are the ones who help sanitize the internet to make it safer and enjoyable for all users by filtering the most dangerous and harmful content being posted.”
Like Maria, we believe our content moderators are invisible heroes of the internet. We know it’s a demanding job, which is why we take employee wellbeing seriously. We have a supportive, caring culture, working closely with all staff to ensure everyone feels they belong.
We look after our people so they can look after us all.