AI in the hands of stalkers, abusers, and traffickers: a new frontier in victims’ rights

AI and Tech-Facilitated Abuse

As advancements in machine learning and natural language processing accelerate by the day, the weaponization of AI against targeted individuals has gone mainstream.

The ways in which technology impacts our lives are evolving almost hourly, and AI is at the forefront of this change. While it has the potential to increase efficiency and make information more widely available, it also presents new risks, including the facilitation of intimate partner violence, child exploitation, and harassment.


AI as a Tool of Stalking

As a surveillance tool, AI could enable offenders to track and monitor their victims with greater ease and precision than ever before. AI-powered algorithms could, for example, aggregate data from an array of sources, such as social media posts and geotagged photos, to approximate or even anticipate a victim’s location.

AI-powered facial recognition technology is far more effective than humans at identifying individuals from images or videos, even when the quality is low or the person is partially obscured. Stalkers could use it to track victims in real time through surveillance cameras, social media, or other online sources, and those with access to facial recognition databases, for example members of law enforcement, could exploit that access.

AI-powered software can analyze vast amounts of data in the blink of an eye, enabling stalkers to surveil their victims’ online activities. By monitoring their victims’ digital footprint, from browsing history to emails to downloads, abusers could gain insights into their daily lives and use this information to manipulate, control, coerce, or blackmail them.

AI could even automate and scale manipulation by tracking interactions, identifying patterns in posting behavior, and analyzing the sentiment of a victim’s communications. Interpersonal abusers could use this, and so could scammers looking for a target.


Impersonation

AI-powered tools can create convincing impersonations of people through voice synthesis and text generation. An offender could use these tools to pose as a victim in order to endanger or frame them, or to pose as someone the victim trusts in order to gain access to them, gather information about them, isolate them, or manipulate their personal and professional relationships. Offenders could also fabricate text messages or emails that appear to come from trusted sources, and use those fabrications to threaten or deceive victims, or to cut them off from their support networks.

An abuser could also generate or manipulate digital evidence to frame a victim for a crime.

In the last few weeks, stories have surfaced of scammers using artificial intelligence to mimic the voice of a family member in distress and con people out of thousands of dollars. An abuser or scammer could use similar techniques to convince a victim to send intimate images, which could then be used to sextort or exploit them.


AI and image-based abuse

Have you tried using AI to create fake headshots for your LinkedIn profile? Or maybe you saw the pic of the Pope in a puffer jacket? What about the images of Trump being arrested, a week before he was actually arrested? AI-powered image-generating systems are now enabling even the most tech-inept among us to manufacture photos and videos that are almost impossible to detect as fake.

Deepfake technology uses deep learning algorithms (which are designed to learn from data to improve their own performance) to create convincing fake images and videos. Almost as soon as deepfake technology was born, it was weaponized against women to create or mimic non-consensual pornography. Abusers already destroy lives by distributing intimate images or videos that were shared with the expectation of privacy (so-called ‘revenge porn’). Thanks to AI, abusers can carry out image-based abuse without ever having to receive an intimate image; they can simply create one.


Child Sexual Abuse/Exploitation and Trafficking

Technology will likely mainstream photorealistic, animated, AI-powered avatars in the near to medium term; voices are essentially already there (see Impersonation, above). Sex traffickers could use these avatars to recruit victims. Currently, C.A. Goldberg, PLLC regularly sees cases of adult predators impersonating teens to target minors for grooming, sexual abuse, and trafficking. Because kids and teens may be less inclined to perceive a peer as dangerous, predators often pose as peers, gathering information on social media, such as where a child goes to school or what their interests are, to create a convincing backstory. It’s about to get exponentially easier for predators to access and groom minors this way.


Doxxing

AI is going to make it far easier for bad actors to create and deploy “bots”. The technical barriers are dropping in real time, and soon a harasser will be able to mobilize an army of avatars posting on social media at their instruction. This technology already exists, but most of the general population doesn’t know how to weaponize it. That won’t be true much longer.

AI search is also going to make it easier than ever before for bad actors to find sensitive information. These two developments, better and more relevant information plus the ability to create seemingly organic mobs, could make doxxing more frequent, intense, and harmful.

The ability to deploy bots, or “intelligent bots,” could also play a role in stalking and harassment: a stalker or harasser could, for example, direct bots to send emails to a victim’s employer, or even fake a voice to place an angry phone call.

Social media platforms will likely play a central role in perpetuating AI-facilitated harms by hosting, and amplifying, efforts to dox victims of targeted harassment campaigns.


The Impact of AI-Facilitated Harms on Victims

There is no ‘going offline’ anymore. In order to live, work, learn, and connect, most of us must be online.

Tech abuse in any form is isolating and terrifying. But the pain and disruption victims experience are too often overlooked or underplayed.

As victims’ advocates, we must anticipate the needs of those we advocate for by assessing the risks and potential harms of emerging technologies.

We will continue to explore:

  • What difficulties will victims face in obtaining justice for AI-facilitated harm?
  • Are law enforcement and prosecutors prepared to handle AI evidence or prosecute AI crimes?
  • How will the regulatory and legal landscapes keep pace with evolving technology?
  • Will legal and social attitudes towards tech abuse change?
  • How will victims’ rights advocates collaborate with cybersecurity experts, therapists, and other professionals to best serve our clients?

AI: positive potential

Can we imagine a world in which AI helps victims? While risk analysis is essential, we must also envision and create outcomes that serve victims.

Currently, AI is being used to spot and remove child sexual abuse imagery at a rate human moderators can’t rival, while also sparing human reviewers the psychological damage of sifting through the very worst content imaginable to stem its spread.

In the future, trained AI technologies could include:

  • 24/7 support services that connect crime or abuse victims with service providers
  • Technologies that help victims of domestic violence assess their digital footprint or scan their devices for vulnerabilities and security compromises
  • Algorithmic prediction of intimate partner violence through the detection of warning signs or patterns of abuse, with alerts to authorities or community moderators
  • Advanced and immediate takedown of non-consensual pornography and deepfakes

As artificial intelligence continues to evolve, so must our understanding of its potential impact on tech-facilitated violence and domestic abuse. We will continue to help those affected and to advocate for a safer digital landscape.

If you or a loved one is experiencing technology-facilitated abuse, click here to get in touch with C.A. Goldberg, PLLC’s experienced team to talk about your options.

Connect with C.A. Goldberg, PLLC on LinkedIn, Instagram, Facebook, YouTube, and Twitter to stay up to date with important news and free resources.


We are not your attorney. Nothing on our website, blog, or social media should be interpreted as legal advice or the creation of an attorney-client relationship. You should not act on, or rely on, information on this site without seeking the advice of an attorney. Prior results do not guarantee a similar outcome. Please keep in mind that the success of any legal matter depends on the unique circumstances of each case: we cannot guarantee particular results for future clients based on successes we have achieved in past legal matters.