Artificial and Human Intelligence for Effective and Ethical Online Safeguarding

Our Solutions

Informed by decades of research and expertise, our products are proven to protect people, organisations and brands from online threats

Real-Time Risk Detection, Crisis Management and Mitigation

Combining human and artificial intelligence, we accurately detect and classify online harms across diverse platforms in real time, enabling proactive intervention and swift mitigation.

Counter-Measures

Based on world-leading research conducted within HateLab, our tool chest of counter-measures, including tailored and targeted counter-speech and proactive defence, is effective at reducing online harms and risks.

Social Media Safeguarding

Drawing on our founders' expertise in online harms and cyber risk, and our strategic partnership with HateLab, our social media safeguarding packages educate and arm your people and clients with the knowledge and tools needed to remain safe online.

HERO

Cutting-Edge Detection: Our Harms Evaluation & Response Observatory (HERO) deploys the latest AI algorithms developed within HateLab to accurately detect and classify online harms across diverse platforms in real time. Our technology can effectively identify nuanced forms of threat, enabling proactive intervention and swift mitigation.

Contextual Understanding: HERO goes beyond mere keyword analysis by incorporating contextual understanding. Our AI models comprehend the subtleties and complexities of language, ensuring precise identification of online harms and reducing false positives.

Actionable Insights: HERO provides organisations, moderators, and marketing teams with actionable insights through an online dashboard and detailed reports that highlight trends, patterns, and the impact of online harms, enabling data-driven decision-making and targeted interventions.

Personalised Recommendations: HERO tailors its recommendations to the specific needs of each user. Individuals receive guidance on how to respond to online harms personally, while moderators gain valuable suggestions to enhance content moderation strategies. Marketing teams can leverage our recommendations to develop inclusive campaigns that counter online harms effectively.

Continuous Learning: HERO’s AI system continually learns and adapts by leveraging vast amounts of human-annotated data via its partnership with HateLab. By staying up to date with emerging online harms and evolving language patterns, HERO ensures its detection and recommendation capabilities remain at the forefront.

Collaborative Approach: HERO promotes collaboration and community engagement. We facilitate dialogue between individuals, moderators, and marketing teams, fostering an environment where stakeholders can share insights, strategies, and success stories to collectively combat online harms.

Threats to People, Products & Brands

Child Safety

Self Harm

Drugs, Weapons, Violence

Malware

Mis & Disinformation

Hate Speech, Abuse, Violence

Frauds & Scams

What Our Customers Say

“nisien.ai were instrumental in providing the data foundation for our 2023 campaign. nisien.ai’s tracking of online harms allowed us to confidently talk about the levels of misogynistic online abuse received and to reflect that in reactive press, digital out-of-home and social. We worked with the team again on another piece of work focused on tackling online homophobia.”

Group Head of a major UK telecoms company

“nisien.ai fundamentally changed the way we monitor the spread of online threats during national events… during live operations, we were quickly inundated with irrelevant information and failed to capture threats systematically, but nisien.ai technology solutions ensured the Hub could monitor unfolding crises in a robust and reliable way.”

UK National policing lead for hate crime

Case Studies

Social Media

A large social media platform drew on our expertise in online harms to inform its community guidelines and develop new strategies to mitigate online hate speech.

Government

A UK government department deployed our safety tech to monitor anti-migrant content related to the settlement of Ukrainian refugees. HERO provided data that fed into national threat assessments on related extremist activity.

Global Luxury Retail

A global retailer used our social media safeguarding package to educate and arm reality stars contributing to a worldwide marketing campaign with the knowledge and tools to combat online risks post-launch.

About Us

Professor Matt Williams

Founder & Chief Scientist

Matt is professor of criminology at Cardiff University and author of ‘The Science of Hate: How prejudice becomes hate and what we can do to stop it’. He established HateLab in 2017 and founded its spin-out, nisien.ai, in 2023. He is the top-cited researcher globally for hate crime and in the top three for hate speech. He is a consultant at TikTok, working on trust & safety and counter-measures.

Professor Pete Burnap

Founder & Chief of AI

Pete is professor of data science and cyber security at Cardiff University and was the AI & Cyber lead at Airbus for six years. He sits on the UK Government AI Council, advising on AI policy related to national security, defence, data ethics, skills and regulation, informing the National AI Strategy and the AI Regulation White Paper. He also directs the Cyber Innovation Hub, a cybersecurity startup incubator.

Dean Doyle

Co-Founder & COO

Dean has spent 25 years in leadership and programme management roles at the Home Office, the College of Policing, the NHS and Lloyds Banking Group. Experiencing hate crime first-hand led him to take up the role of Head of Delivery at HateLab. He continues this work as COO of nisien.ai.

Get In Touch

We will get back to you within 24 hours
