Crimes Against Humanity and War Crimes Act (S.C. 2000, c. 24)

Guide III: AI-Assisted Cybertorture
Understanding These Crimes: A White Paper Exploring UN Human Rights Council Report A/HRC/43/49
December 10th, 2025
Coordinated Algorithmic Harassment:
The Emerging Threat of Platform-Mediated Psychological Operations
Abstract
This paper examines the growing phenomenon of coordinated algorithmic harassment campaigns—sophisticated operations that exploit social media recommendation systems to systematically target individuals with psychologically harmful content. Drawing on technical analysis of platform architectures, documented case patterns, psychological research on trauma mechanisms, and emerging legal frameworks, this analysis demonstrates that algorithmic harassment represents a qualitatively distinct and increasingly prevalent form of digital abuse. Unlike traditional online harassment involving direct threats or messages, algorithmic targeting operates through manipulation of content recommendation systems, creating plausible deniability while inflicting severe psychological harm. Evidence from multiple domains—including research on organized cybercrime networks, platform manipulation techniques, documented harassment campaigns, and psychological trauma literature—establishes that coordinated algorithmic harassment is both operationally feasible and actively deployed against vulnerable targets. The paper concludes with policy recommendations for detection, prevention, and legal accountability.
Keywords: algorithmic harassment, coordinated inauthentic behavior, platform manipulation, psychological operations, recommendation systems, organized cybercrime, digital abuse
I. Introduction: The Evolution of Digital Harassment
A. From Direct Threats to Algorithmic Manipulation
Online harassment has evolved dramatically since the early internet era. Initial forms were straightforward: threatening emails, abusive comments, explicit messages sent directly to targets.[1] These direct communication methods, while harmful, operated within a sender-receiver paradigm that legal systems and platform policies could readily identify and address. A threatening message contains explicit hostile content, demonstrates clear intent, and creates an evidence trail that supports intervention.[2]
However, the architecture of modern social media platforms has created new vectors for harassment that operate fundamentally differently. Rather than relying on direct communication, sophisticated actors now exploit algorithmic recommendation systems—the automated content delivery mechanisms that determine what users see in their feeds, recommended videos, and search results.[3] This shift from direct harassment to algorithmic targeting represents a qualitative change in the nature of digital abuse, one that current legal and institutional frameworks struggle to recognize or address.
B. The Algorithmic Targeting Architecture
Modern social media platforms employ machine learning algorithms designed to maximize user engagement by predicting and delivering content each user is likely to interact with.[4] These systems analyze vast datasets of user behavior—clicks, watches, searches, pauses, scrolls—to build detailed psychographic profiles and optimize content recommendations.[5] While platforms present this architecture as neutral personalization serving user interests, the same mechanisms can be weaponized to deliver psychologically harmful content with precision and persistence.
Algorithmic targeting operates through a multi-stage process. First, operators gather intelligence on a target through public information, data broker purchases, surveillance, or insider access.[6] Second, they create or identify content designed to exploit the target's specific psychological vulnerabilities—traumas, fears, beliefs, or identity concerns. Third, they manipulate platform signals (keywords, tags, engagement metrics, network connections) to ensure recommendation algorithms deliver this content to the target.[7] Finally, sophisticated operations implement feedback loops that monitor target responses and automatically adapt content, timing, and delivery methods to maximize psychological impact.[8]
This architecture creates profound asymmetry: targets experience systematic psychological assault while every actor in the chain can plausibly deny responsibility. Content creators claim they produce general material for broad audiences. Platform operators claim algorithmic neutrality—that recommendations simply reflect user interests. Coordinating actors remain invisible behind distributed networks of apparently independent channels.[9] The target knows they are being deliberately targeted but faces nearly insurmountable barriers to proving intentional infliction.
C. Research Questions and Scope
This paper addresses three central questions:
1. Operational Reality: Is coordinated algorithmic harassment actually occurring, or is it primarily a theoretical concern? What evidence demonstrates that organized actors are exploiting platform architectures to systematically target individuals?
2. Scale and Prevalence: How widespread are these operations? Are they isolated incidents or emerging patterns affecting significant numbers of targets across different contexts?
3. Harm Severity: What psychological, social, and functional impacts do these campaigns produce? How do algorithmic targeting effects compare to traditional online harassment?
The analysis draws on multiple evidence streams: technical research on platform manipulation, documentation of organized cybercrime operations, case studies of targeting campaigns, psychological literature on trauma mechanisms, and emerging legal frameworks addressing technology-mediated abuse. The goal is to establish that coordinated algorithmic harassment is not speculative but operational, not isolated but systematic, and not trivial but severely harmful—requiring urgent policy and legal response.
II. Technical Foundations: How Platform Architectures Enable Targeting
A. Recommendation Algorithm Mechanics
To understand how algorithmic harassment operates, we must first examine the technical architecture of modern content recommendation systems. Platforms like YouTube, TikTok, Facebook, and Instagram employ sophisticated machine learning models designed to predict what content each user will engage with and prioritize that content in feeds, recommendations, and autoplay sequences.[10]
The core recommendation pipeline operates through several stages (a consolidated toy sketch follows the list):
Data Collection: Platforms continuously gather behavioral data from users—every video watched, post liked, comment made, search conducted, link clicked, and millisecond-level engagement metric (pause duration, rewatch segments, scroll speed).[11] This data is augmented with demographic information, device data, location history, and cross-platform tracking through advertising networks and data brokers.[12]
Feature Engineering: Raw behavioral data is transformed into features that characterize user interests, emotional states, and engagement patterns. For example, a user who watches true crime videos late at night while repeatedly pausing might be classified as having anxiety-driven content consumption patterns.[13]
Collaborative and Content-Based Filtering: Algorithms identify patterns across users ("people who watched X also watched Y") and within content ("this video is similar to videos you previously engaged with"). These techniques enable platforms to recommend content even for new users or new content with limited interaction history.[14]
Engagement Prediction: Machine learning models predict the probability that a given user will click, watch, comment on, or share specific content. Predictions are continuously updated based on new behavioral data, enabling real-time adaptation.[15]
Ranking and Delivery: Content is ranked by predicted engagement probability and delivered through feeds, notifications, search results, and autoplay queues. Higher-ranked content receives more visibility, creating feedback loops where popular content becomes more popular.[16]
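Reduced to code, the stages above look roughly as follows. This is a minimal sketch under stated assumptions: the feature names, the exponential-moving-average profile update, and the sigmoid scoring rule are all invented stand-ins for large learned models trained on billions of interactions, not any platform's actual implementation.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Item:
    item_id: str
    features: dict[str, float]  # topic affinities in [0, 1] (hypothetical)

@dataclass
class UserProfile:
    user_id: str
    interests: dict[str, float] = field(default_factory=dict)

    def update(self, item: Item, watch_fraction: float) -> None:
        # Feature engineering: fold observed engagement back into the profile
        # (exponential moving average), so behaviour shapes future ranking --
        # the feedback loop described in the stages above.
        for topic, weight in item.features.items():
            prev = self.interests.get(topic, 0.0)
            self.interests[topic] = 0.8 * prev + 0.2 * weight * watch_fraction

def predicted_engagement(user: UserProfile, item: Item) -> float:
    # Engagement prediction: a toy stand-in for a learned model -- profile/item
    # similarity squashed into a probability-like score.
    score = sum(user.interests.get(t, 0.0) * w for t, w in item.features.items())
    return 1.0 / (1.0 + math.exp(-4.0 * (score - 0.5)))

def rank(user: UserProfile, candidates: list[Item]) -> list[Item]:
    # Ranking and delivery: order candidates by predicted engagement.
    return sorted(candidates, key=lambda it: predicted_engagement(user, it),
                  reverse=True)

# Usage: two full watches are enough to pull related content to the top.
user = UserProfile("u1")
catalog = [
    Item("gardening-101", {"gardening": 0.9}),
    Item("true-crime-ep7", {"true_crime": 0.9, "anxiety": 0.4}),
    Item("late-night-crime", {"true_crime": 0.8, "anxiety": 0.6}),
]
user.update(catalog[1], watch_fraction=1.0)
user.update(catalog[1], watch_fraction=1.0)
print([it.item_id for it in rank(user, catalog)])
# -> ['true-crime-ep7', 'late-night-crime', 'gardening-101']
```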
From the platform's business perspective, this system maximizes advertising revenue by keeping users engaged for longer durations and providing detailed targeting data to advertisers.[17] However, the same architecture enables malicious exploitation: if an attacker can manipulate signals to make harmful content appear "engaging" to a specific user, the algorithm will deliver it persistently and prominently.
B. Exploitable Vulnerabilities in Recommendation Systems
Several characteristics of recommendation algorithms create opportunities for deliberate targeting:
1. Personalization Depth
Modern algorithms don't merely categorize users into broad demographic groups—they build individual psychographic profiles incorporating hundreds of behavioral features.[18] Research demonstrates that digital footprints predict personality traits, political beliefs, sexual orientation, intelligence, and psychological vulnerabilities with accuracy exceeding human judgment.[19] This profiling enables attackers to craft content precisely calibrated to individual psychological weak points.
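The cited footprint-profiling research typically fits regression-style models over like/watch matrices. The following is a minimal sketch of that idea with invented feature names and weights; no claim is made that these particular signals or values are used by any real profiler.

```python
import math

# Hypothetical learned weights mapping behavioural features to one trait
# dimension ("anxiety proneness"). Real profilers learn such weights from
# millions of labelled users; these numbers are invented for illustration.
WEIGHTS = {
    "late_night_sessions": 0.9,
    "true_crime_watch_time": 0.7,
    "health_searches": 0.5,
    "rewatch_rate": 0.4,
}
BIAS = -1.5

def trait_score(features: dict[str, float]) -> float:
    """Logistic score in (0, 1) from behaviour features normalized to [0, 1]."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

profile = {"late_night_sessions": 0.8, "true_crime_watch_time": 0.9,
           "health_searches": 0.3, "rewatch_rate": 0.6}
print(f"inferred trait score: {trait_score(profile):.2f}")  # ~0.56
```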
2. Engagement Optimization Over Safety
Platforms optimize for engagement metrics (watch time, interactions, return visits) rather than user wellbeing or content accuracy.[20] Research shows recommendation algorithms preferentially amplify emotionally arousing, controversial, and polarizing content because such material generates higher engagement.[21] An attacker creating content designed to distress a specific target can exploit this bias—if the target engages with the content (even through distress-driven viewing or attempts to understand the targeting), the algorithm interprets this as a positive signal and delivers more similar content.
3. Closed-Loop Adaptation
Recommendation systems continuously update based on user responses, creating feedback loops that can be exploited for escalation.[22] If a target blocks one channel, new similar channels appear. If the target changes consumption patterns to avoid triggers, the algorithm detects the new patterns and adjusts targeting accordingly. This adaptive capacity transforms harassment from a static campaign into a dynamic adversarial optimization problem where the target's defensive actions become intelligence feeding the next attack iteration.
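Schematically, the adversarial loop described here is a simple control loop: observe the target's response, adapt when engagement drops. The sketch below is deliberately skeletal—observation and escalation are stubbed with placeholders, since the point is the loop structure, not any operational detail.

```python
import random

random.seed(1)

def observe_engagement(variant: str) -> float:
    # Stub standing in for watch-time / click telemetry on seeded content.
    return random.random()

def rotate_variant(variant: str) -> str:
    # Stub standing in for swapping to a new channel or content theme.
    return variant + "+"

variant = "theme-A"
for step in range(5):
    engagement = observe_engagement(variant)
    if engagement < 0.5:
        # The target is disengaging (blocking, avoiding): the loop adapts,
        # so defensive behaviour becomes input to the next iteration.
        variant = rotate_variant(variant)
    print(step, variant, round(engagement, 2))
```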
4. Cross-Platform Coordination
While each platform operates independent recommendation algorithms, user tracking across platforms enables coordinated targeting.[23] Data brokers compile comprehensive profiles by linking activity across dozens of services.[24] An attacker with access to this cross-platform data can ensure that avoiding YouTube targeting doesn't provide escape—similar content appears on TikTok, Instagram, Facebook, and podcast platforms simultaneously.
5. Opacity and Deniability
Platform algorithms are proprietary trade secrets, preventing external audit or accountability.[25] Users cannot determine why specific content was recommended or whether recommendations reflect genuine personalization versus deliberate manipulation.[26] This opacity creates perfect deniability: platforms claim algorithmic neutrality, attackers claim coincidence, and targets cannot access the technical evidence that would prove targeting.
C. Signal Manipulation Techniques
Operationally, algorithmic targeting requires manipulating platform signals to ensure harmful content reaches specific individuals. Documented techniques include:
SEO and Metadata Optimization: Content is tagged with keywords, descriptions, and metadata matching the target's known search patterns and interests.[27] If the target frequently searches for legal help regarding employment discrimination, attackers create content about "fake discrimination claims" optimized to appear in those search results.
Engagement Boosting: Initial views, likes, comments, and shares are artificially generated through bots, click farms, or coordinated networks to signal popularity to algorithms.[28] Platforms interpret this apparent engagement as indicating quality content worthy of broader recommendation.
Network Infiltration: Content is uploaded from accounts the target already follows or is algorithmically likely to follow based on social network analysis.[29] When content appears from familiar sources, it bypasses suspicion and achieves higher engagement.
Collaborative Filtering Exploitation: Attackers create fake user accounts with behavioral profiles matching the target's profile, then have these fake accounts engage with harmful content. Collaborative filtering algorithms detect that "similar users" watched specific content and recommend it to the target (a minimal sketch of this mechanism follows this list).[30]
Trend Hijacking: Content is associated with trending topics, hashtags, or cultural moments the target is likely to engage with, piggybacking on organic interest to achieve delivery.[31]
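Of these techniques, collaborative-filtering exploitation is the most algorithmically direct. The sketch below implements a bare-bones item-to-item co-visitation recommender and then injects sybil accounts that copy a hypothetical target's history. Real recommenders use matrix factorization or neural retrieval at scale, but the co-visitation signal being gamed is analogous; all account and item names are invented.

```python
from collections import defaultdict
from itertools import combinations

def co_visitation(histories: dict[str, set[str]]) -> dict[frozenset, int]:
    # Count how often each pair of items appears in the same watch history.
    counts: dict[frozenset, int] = defaultdict(int)
    for items in histories.values():
        for a, b in combinations(sorted(items), 2):
            counts[frozenset((a, b))] += 1
    return counts

def recommend(target_history: set[str], counts: dict[frozenset, int],
              k: int = 3) -> list[str]:
    # Score unseen items by co-visitation with items the target has watched.
    scores: dict[str, int] = defaultdict(int)
    for pair, n in counts.items():
        a, b = tuple(pair)
        if a in target_history and b not in target_history:
            scores[b] += n
        elif b in target_history and a not in target_history:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

histories = {"target": {"cooking", "legal-help", "gardening"}}
# Organic users establish benign co-visitation patterns.
for i in range(20):
    histories[f"user{i}"] = {"cooking", "gardening"}
# Attack: sybil accounts copy the target's profile, then also "watch" the
# harmful item, making it look popular among "similar users".
for i in range(30):
    histories[f"sybil{i}"] = {"cooking", "legal-help", "harmful-video"}

counts = co_visitation(histories)
print(recommend(histories["target"], counts))  # -> ['harmful-video']
```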
These techniques transform algorithmic recommendations from neutral personalization into precision delivery mechanisms for psychological warfare—targeting can achieve sub-24-hour convergence with private events, mirror sealed court documents, and adapt in real-time to target responses.[32]
III. Organized Actors: The Infrastructure of Coordinated Harassment
A. Crime-as-a-Service and Digital Crime Ecosystems
Understanding who conducts algorithmic harassment campaigns requires examining the broader infrastructure of organized cybercrime. Contemporary digital crime operates through "crime-as-a-service" business models where specialized actors provide technical capabilities to others lacking expertise.[33]
Research by BAE Systems analyzing digital crime infrastructure found that approximately 80% of digital crime originates from organized crime groups rather than individual actors.[34] These groups operate as sophisticated businesses with specialized divisions: some develop malware, others run botnets, others provide money laundering services, and others offer harassment-for-hire targeting specific individuals.[35]
The cybercrime ecosystem includes several relevant services:
Botnet Infrastructure: Networks of compromised computers and fake accounts used to artificially inflate engagement metrics, generate fake views, and create appearance of organic content popularity.[36]
Doxing Services: Researchers compile comprehensive personal information about targets—addresses, phone numbers, family members, employment history, financial records, medical information—sold to harassment operators.[37]
Reputation Management (Inverse): While legitimate reputation management helps individuals repair their online presence, criminal variants offer "reputation destruction"—systematically polluting search results, creating defamatory content, and ensuring harmful material ranks highly for the target's name.[38]
Social Media Manipulation: Specialized firms offer services including fake followers, coordinated content campaigns, targeted harassment, and algorithmic manipulation to achieve specific outcomes (suppress content, amplify messages, target individuals).[39]
Data Brokers: Legal and quasi-legal businesses compile detailed personal profiles from hundreds of sources (browsing history, purchase records, location data, social media activity, public records) and sell access to anyone willing to pay.[40]
This infrastructure means that conducting sophisticated algorithmic harassment campaigns no longer requires advanced technical skills—it can be purchased as a service from established criminal enterprises with the resources and expertise to execute complex operations.[41]
B. State and State-Adjacent Infrastructures: Structural Preconditions for High-End Campaigns
While low- to mid-level harassment can be orchestrated by non-state actors using commodity tools—purchased botnets, fake accounts, basic SEO—the precision targeting documented in Section IV requires capabilities that are overwhelmingly concentrated in states, state-adjacent contractors, and large institutional actors. In practice, advanced campaigns displaying sealed-file mirroring, sub-24-hour convergence with non-public events, and persistent cross-platform adaptation presuppose structural preconditions that are global in form but locally instantiated in particular jurisdictions, including Canada.
1. Platform Architecture Access
High-end campaigns require more than simply posting content and hoping it travels. They depend on the ability to steer recommendation and advertising systems toward a specific individual or small set of targets in near real time. That implies some combination of:
- Direct or privileged access to platform architecture, whether through formal cooperation (e.g., law-enforcement or “national security” interfaces), informal back-channels, contractor arrangements, or covert infiltration of internal tools that influence ranking, custom audiences, content moderation, or enforcement; and
- State-level technical capabilities, akin to signals-intelligence infrastructure, capable of ingesting and modelling vast, multi-platform data streams and feeding targeting instructions back into platform systems at operational tempo.
Real-time manipulation of recommendation outputs is not something the average user—or even a typical criminal group—can achieve at scale. It is native to states and surveillance-capitalist platforms, and to those private actors entrusted with, or embedded within, those systems.
2. Deep Cross-Domain Data Integration
Campaigns that align tightly with sealed court documents, confidential medical or psychiatric information, and non-public procedural milestones require access to data that cannot be reconstructed from open sources or casual doxing. They rely on:
- Legal-system data access – direct or indirect visibility into court files, registry systems, payment-into-court mechanisms, and case-management metadata, including materials that are formally sealed or practically inaccessible to ordinary litigants and the public;
- Multi-platform integration – the ability to correlate those legal events with behavioural telemetry and recommendation pathways across YouTube, TikTok, Instagram, Facebook, podcasts, and search so that a confidential filing or in camera hearing can be echoed across a target’s feeds within hours; and
- Surveillance-state–style event monitoring, where signals from courts, health systems, financial infrastructure, and online activity can be fused into a single operational picture and quickly turned into tasking for harassment content.
These are precisely the kinds of data-fusion capabilities that have emerged at the intersection of surveillance capitalism and contemporary intelligence practice. They are not realistically available to the average “Tom, Dick, or Harry” without some form of state, quasi-state, or platform-level access.
3. Operational Security and Institutional Protection
Sustaining years-long, deniable campaigns that survive platform enforcement and victim complaints also requires:
- Counter-forensics expertise – knowledge of how platforms detect coordinated inauthentic behaviour, how law enforcement conducts basic cyber-investigations, and how to stay under those thresholds by rotating accounts, infrastructure, and tactics;
- Robust coordination infrastructure – command-and-control, tasking, compartmentalization, and payment systems that are not easily disrupted by conventional police action or platform bans; and
- Institutional cover or protection, formal or informal, such that complaints to police, regulators, or courts stall, are misdirected, or even boomerang back onto the victim through psychiatric framing or procedural sanctions.
Where these conditions are met, the operation is no longer plausibly described as mere “online abuse”. It begins to resemble a psychological operations campaign in the sense described in Joint Publication 3-13.2 and subsequent information-warfare doctrine: a structured effort to shape cognition and behaviour through persistent, multi-channel influence, with deliberate use of deniability and asymmetry.[42]
4. Global and Canadian Implementations of the Capability Stack
Documented information operations show that such state-adjacent capability stacks are not speculative. Globally, Russian operations conducted by the Internet Research Agency during the 2016 U.S. elections demonstrated sophisticated social-media manipulation, bot networks, fake personas, and recommendation gaming at national scale.[43][46] Chinese influence operations have deployed fake accounts, coordinated harassment, and platform manipulation to target dissidents and critics, including individual journalists and activists.[44] Mercenary cyber-operations and private intelligence firms—operating in a grey zone between state and market—sell surveillance, disinformation, and harassment services to government and corporate clients.[45]
In Canada, analogous technical and institutional capacities are already in place, albeit framed in terms of national security, law enforcement, and “public safety”:
- The Communications Security Establishment (CSE) has repeatedly warned that state and state-sponsored actors conduct cyber-threat activity against Canada’s democratic processes, with social-media manipulation and algorithmically amplified propaganda identified as core techniques.[164] Bill C-59’s Communications Security Establishment Act authorizes CSE not only to collect foreign intelligence but also to conduct active cyber operations and to share information broadly with domestic partners and foreign allies, embedding Canadian infrastructures in a Five Eyes surveillance and operations network.[165][166][167]
- At the law-enforcement level, Canadian police services have piloted and, in some cases, deployed algorithmic policing tools, including social-media monitoring and predictive systems. Citizen Lab’s To Surveil and Predict concludes that such tools raise serious concerns under the Charter and international human-rights law, particularly around opacity, bias, and due-process deficits.[168]
- Canadian authorities have also purchased and used commercial surveillance systems. The Office of the Privacy Commissioner found that Clearview AI’s facial-recognition service violated Canadian privacy law, and that the RCMP’s use of Clearview breached the Privacy Act by drawing on an unlawful biometric database.[169] Citizen Lab’s Installing Fear documents a domestic market for stalkerware and smartphone spyware capable of covertly exfiltrating communications and location data,[170] while more recent reporting links Canadian police services to mercenary spyware such as Paragon’s “Graphite” and confirms RCMP deployment of on-device investigative tools (ODITs).[171]
- Proposed and existing data-sharing frameworks further entrench a state–corporate nexus. CSE’s statutory information-sharing authorities, combined with emerging cross-border data-sharing schemes such as Bill C-2, position Canadian platforms and service providers within a transnational enforcement and intelligence network in which user data can circulate broadly under colour of law.[166][167][172]
None of these developments, taken alone, prove the existence of algorithmic harassment campaigns targeting specific Canadian litigants or whistleblowers. They do, however, show that Canada already maintains state-level infrastructures capable of ingesting, modelling, and operationalizing the same classes of data and platform interfaces required for the kinds of operations described in Section IV. Hence, the inference framework under R. v. Villaroman, 2016 SCC 33 applies, as noted in Section VI.
5. From “Cybercrime” to State-Adjacent Psychological Operations
Taken together, the structural preconditions and institutional instantiations outlined above support a central claim of this paper: algorithmic harassment at the sophistication levels documented in Section IV cannot be understood as merely ordinary cybercrime. While commodity tools enable low-grade harassment to be bought and sold on crime-as-a-service markets, campaigns characterized by:
- tight temporal alignment with sealed or confidential legal and medical events;
- apparent access to non-public records typically held in legal, medical, or law-enforcement systems;
- cross-platform orchestration aimed at a specific individual; and
- long-horizon persistence under institutional scrutiny,
are far more plausibly explained as state-run, state-adjacent, or institutionally shielded psychological operations delivered through commercial platforms. In this context, the range of coherent perpetrator models effectively collapses to two families: (i) state or state-adjacent actors leveraging institutional data and platform access; or (ii) highly resourced corporate “black-ops” replicating the same visibility through in-vivo sensing and commercial surveillance infrastructures, rather than direct access to public-agency databases. This effectively precludes small actors altogether.
This does not mean that every such case is directly commanded by a named intelligence agency such as CSE or CSIS. Rather, it recognizes that the delivery mechanism itself—manipulation of industrial-scale recommendation and advertising systems—is structurally rooted in a state–corporate nexus. In Canada, that nexus links national-security agencies, police services, commercial platforms, and data brokers in ways that make misuse technically feasible and institutionally difficult to detect or contest.
Any serious regulatory or legal response must therefore address not only nominal “perpetrators” but the institutional and geopolitical architectures that make such targeting possible: national-security and policing statutes, information-sharing regimes, oversight mechanisms, and the design and governance of platform infrastructures themselves.
C. Corporate and Civil Litigation Contexts
Algorithmic harassment also arises in commercial and civil litigation contexts where well-resourced entities seek to silence critics, intimidate whistleblowers, or gain advantage in legal proceedings.
Several mechanisms enable this:
Reputation Management Firms: Legitimate reputation management sometimes crosses into coordinated harassment—creating negative content about litigation opponents, manipulating search results to give prominence to harmful material, or conducting "opposition research" that becomes weaponized.[47]
Litigation Strategy: In high-stakes civil cases, some defense firms employ aggressive tactics including surveillance of plaintiffs, social media monitoring, and strategic content creation designed to undermine credibility or create psychological pressure to settle.[48]
Whistleblower Retaliation: Documented cases show corporations responding to whistleblowers with coordinated campaigns including online harassment, reputation destruction, and psychological pressure tactics designed to discourage others from reporting misconduct.[49]
SLAPP Suits as Harassment: Strategic Lawsuits Against Public Participation (SLAPP) use the legal process itself as a harassment tool; algorithmic targeting can complement this by creating online environments that psychologically reinforce litigation pressure even when legal claims are meritless.[50]
The corporate context is particularly concerning because legitimate business entities can purchase harassment services through intermediaries, maintaining plausible deniability while deploying operational capabilities exceeding those of individual criminals.
The actors described above operate at the top end of the capability spectrum, with privileged access to data, infrastructure, or institutional protection. However, not all algorithmically mediated harassment requires this level of sophistication. A significant share of what targets experience on platforms involves loosely coordinated or emergent networks whose activity may be organic, opportunistically exploited, or deliberately seeded by higher-order operators. Section III.D examines this lower- to mid-tier layer.
D. Networked Harassment and Mobbing
Not all organized algorithmic harassment exhibits the high-end, state-adjacent characteristics outlined in Section III.B. A substantial portion of what targets encounter on platforms arises from loosely coordinated networks engaging in ideologically motivated mobbing, dogpiling, and brigading. These networks often lack formal organizational structures or direct access to privileged data, but they can still generate severe harm when their activity is amplified and sorted by recommendation systems. In some cases, they emerge organically from existing grievances or subcultures; in others, they are nudged, seeded, or amplified by the more capable actors described earlier, who exploit networked harassment as the visible surface of deeper psychological operations.
Online harassment research documents several patterns:
Coordinated Dogpiling: Multiple accounts simultaneously target an individual with abuse, threats, or defamatory content. While this often involves direct messaging, algorithmic amplification occurs when coordinated accounts post similar content that recommendation algorithms interpret as trending or popular.[51]
Brigading: Groups coordinate to mass-report content for platform removal, flood targets with unwanted engagement, or manipulate voting/ranking systems to suppress target visibility while amplifying harassment content.[52]
Doxing Networks: Communities share personal information about targets, collectively compile comprehensive profiles, and coordinate harassment across platforms.[53] Research on toxic online communities shows sophisticated coordination using dedicated channels (Discord servers, Telegram groups, private forums) invisible to platforms and targets.[54]
Ideological Radicalization Pipelines: Studies document how recommendation algorithms facilitate radicalization pathways, leading users from mainstream content toward increasingly extreme material.[55] While typically discussed in terrorism or political extremism contexts, the same mechanisms enable harassment networks—algorithms connect users interested in targeting specific individuals or groups, creating self-reinforcing communities.
These networked forms of harassment blur the line between organized crime operations and emergent collective behavior—coordination exists without central control, creating distributed operations difficult to attribute or disrupt. In practice, therefore, networked harassment should be understood both as a distinct phenomenon—capable of arising without state-grade infrastructure—and as a weaponizable layer that higher-capability actors can trigger, steer, or hide behind, allowing sophisticated campaigns to masquerade as spontaneous online hostility.
IV. Evidence of Operational Reality: Documented Cases and Patterns
A. Academic Documentation of Platform Manipulation
Multiple streams of academic research document that platform manipulation for targeting purposes is not theoretical but operational.
Bot Networks and Coordinated Inauthentic Behavior: Research quantifying social bot prevalence finds that 9-15% of active Twitter accounts are bots, with higher percentages on other platforms.[56] These bots engage in coordinated activity including amplifying specific content, harassing targets, and gaming recommendation algorithms.[57] Studies analyzing social bot behavior show they systematically spread misinformation and increase exposure to inflammatory content.[58]
Algorithmic Amplification Studies: Researchers analyzing YouTube recommendation patterns document systematic amplification of progressively extreme content—users watching mainstream political content receive recommendations for increasingly radical material.[59] While this research focuses on radicalization rather than individual targeting, it demonstrates that recommendation algorithms can be manipulated to create directional content exposure pathways.
Harassment Campaign Documentation: Academic studies of online harassment campaigns find coordinated patterns including multiple accounts targeting individuals, strategic timing of harassment around specific events, and cross-platform coordination.[60] Research on gender-based online violence documents sophisticated campaigns involving doxing, impersonation, and coordinated abuse.[61]
Data Broker Capabilities: Investigative research into data broker industry reveals that detailed personal profiles—including precise location history, purchasing behavior, browsing history, health information, and social relationships—are commercially available for minimal cost.[62] One investigation found that for $160, researchers could purchase a dataset of mental health sufferers including names, addresses, and specific conditions.[63]
B. Case Pattern Analysis: Litigation Retaliation
While opacity obscures many individual cases, characteristic patterns emerge across multiple documented instances:
Pattern Elements:
- Target files civil lawsuit (employment discrimination, fraud, whistleblower retaliation, civil rights violation)
- Within weeks, themed YouTube channels appear with content about "false accusers," "frivolous lawsuits," or topics paralleling the case
- Content includes imagery, scenarios, or language with unusual specificity to the case details
- Recommendations surge around court dates, filing deadlines, or notable private events
- Multiple apparently independent channels emerge with coordinated themes, often with verbatim scripting and symbolism
- Content evolves to attack target's credibility, mental health, or motivations
- Ongoing escalation
Convergence Evidence:
- Videos posted within days, hours, and sometimes minutes of sealed court milestones discussing identical subject matter
- AI-generated thumbnail images matching exhibits filed under seal
- Content using exact phrasing from confidential legal documents
- Verbatim scripting and symbolism (clothing, props, etc.)
Institutional Responses:
- Police decline investigation, file false reports, and pathologize the victim
- Platforms claim algorithmic neutrality and deny policy violations
- Defense counsel dismisses the evidence as nonexistent, irrelevant, or unconnected to the civil proceeding
- Courts seal the evidence anyway (even though the posts are public), preventing public scrutiny of convergence patterns
These patterns appear across multiple independent cases with different parties, jurisdictions, and legal issues—suggesting systematic rather than coincidental targeting.[64]
C. Whistleblower Targeting Patterns
Similar patterns emerge in whistleblower contexts:
- Employee reports misconduct through internal or regulatory channels
- Social media accounts appear discussing "fake whistleblowers," "disgruntled employees," or industry-specific retaliation themes
- Content includes details matching confidential reports or internal communications
- Timing converges with regulatory filings or investigation milestones
- Network infiltration—content shared by fake accounts claiming industry connections
- Institutional foreclosure—employers retaliate, regulators decline investigation, platforms deny coordination
The psychological impact on whistleblowers is particularly severe because the targeting combines employment retaliation, legal complexity, and isolation—whistleblower protection laws rarely address algorithmic harassment, leaving targets without remedy even when other retaliation is legally prohibited.[65]
D. Personal Relationship and Family Law Contexts
Algorithmic targeting also appears in personal relationship contexts including divorce, custody disputes, and family conflicts:
- Content appears from family-adjacent actors or connected persons
- "Prophetic" or "spiritual" framing provides deniability ("God told me" versus "I am deliberately harassing you")
- Themes attack parenting, moral character, mental health, or spiritual standing
- Coordination with private family events (birthdays, therapy sessions, medical appointments)
- Convergence with sealed family court documents
- Institutional weaponization—courts order psychiatric evaluations of targets; medical systems interpret complaints as paranoid ideation
Family law contexts are particularly vulnerable because court proceedings involve highly personal information, family members may have access to private details, and legal standards emphasize "stability"—making allegations of harassment easily reframed as instability warranting custody loss or protective orders against the target rather than the harasser.[66]
E. Quantitative Evidence: Harassment Prevalence Studies
While systematic data on algorithmic harassment specifically remains limited, broader online harassment research provides context:
General Prevalence: Pew Research Center studies find that 41% of Americans have experienced online harassment, with 25% experiencing more severe forms including physical threats, stalking, sexual harassment, or sustained harassment.[67] Notably, 18% report being targeted by strangers, suggesting organized rather than interpersonal harassment. Sheridan et al. (2020), working from a corpus of over twenty million online citations, concluded that organized "gang-stalking" is a "widespread phenomenon" that has largely evaded scrutiny.[156]
Targeted Demographics: Women, particularly young women, experience disproportionate online harassment—especially sexual harassment and stalking.[68] LGBTQ+ individuals, racial minorities, and public figures face elevated harassment risks.[69] Activists, journalists, and political candidates are frequently targeted with coordinated campaigns.[70]
Platform-Specific Studies: Research on specific platforms reveals coordinated targeting patterns. Twitter analysis found that a small number of highly active accounts generate disproportionate harassment, consistent with organized rather than organic behavior.[71] YouTube research documents brigading and coordinated flagging campaigns targeting specific creators.[72]
Employment and Economic Impacts: Studies document severe consequences for harassment targets including changing employment (27%), experiencing reputation damage (22%), and physical safety concerns (17%).[73] For severe harassment, rates are higher—53% of targets report persistent fear and negative emotional states.[74]
While these studies don't specifically isolate algorithmic harassment from other forms, they establish baseline prevalence rates and impact severity. Algorithmic targeting represents a subset of online harassment—one characterized by greater sophistication, persistence, and psychological precision.
V. Psychological Mechanisms: Why Algorithmic Harassment Causes Severe Harm
A. Neurobiological Foundations of Trauma
To understand why algorithmic harassment produces severe and lasting psychological injury, we must examine the neurobiological systems that respond to chronic threat.
Stress Response Architecture: Human stress response evolved for acute, time-bounded dangers. When threat is detected, the hypothalamic-pituitary-adrenal (HPA) axis activates, releasing cortisol and triggering fight-flight-freeze responses.[75] This mobilization is metabolically expensive but sustainable because recovery periods allow system recalibration.
Chronic threat—danger persisting daily for months or years—forces continuous activation of emergency systems. This produces several documented changes:
HPA Axis Dysregulation: Chronic stress alters cortisol rhythms, potentially causing either blunted responses (exhaustion) or heightened reactivity (sensitization).[76] Meta-analyses document HPA axis dysregulation in PTSD and chronic trauma survivors.[77]
Hippocampal Volume Reduction: The hippocampus, critical for memory and contextual learning, is particularly vulnerable to chronic stress. Multiple studies document hippocampal volume reduction visible on MRI in PTSD patients, with meta-analyses confirming effect sizes around 6-8% volume decrease.[78] Importantly, research specifically examining torture survivors finds even more pronounced hippocampal effects.[79]
Amygdala Hyperactivity: The amygdala, which detects threats and triggers fear responses, becomes hyperactive under chronic stress, causing over-attribution of threat to neutral stimuli.[80]
Prefrontal Cortex Impairment: Chronic stress degrades prefrontal function—the executive control system regulating emotion, decision-making, and impulse control.[81] This manifests as concentration difficulties, impaired judgment, emotional volatility, and reduced cognitive flexibility.
Brain Network Dysregulation: Recent neuroimaging research reveals altered connectivity between major brain networks in trauma survivors.[82] Specifically, research on torture survivors documents disrupted intrinsic network connectivity patterns distinguishing them from other trauma populations.[83]
Critically, these changes are not temporary distress that resolves when stressors end—they are structural and functional alterations to brain systems, observable on neuroimaging, that persist years after trauma exposure and often resist treatment.[84]
B. Complex PTSD and Chronic Interpersonal Trauma
Beyond basic PTSD, chronic interpersonal trauma produces Complex PTSD (CPTSD)—a syndrome recognized in the ICD-11 involving standard PTSD symptoms plus three additional clusters reflecting chronic traumatization.[85]
CPTSD Additional Features:
1. Affect Dysregulation: Difficulty controlling emotions, explosive anger, self-destructive impulses, rapid swings between numbness and overwhelm.[86]
2. Negative Self-Concept: Persistent shame, guilt, worthlessness, and identity confusion—the sense of self fragments or destabilizes.[87]
3. Interpersonal Impairment: Difficulty trusting others, maintaining relationships, or feeling connected.[88]
Research distinguishes CPTSD from standard PTSD through trauma characteristics: CPTSD typically arises from prolonged, repeated trauma involving interpersonal betrayal (childhood abuse, domestic violence, torture, captivity).[89] Algorithmic harassment shares key features: prolonged duration (operations spanning years), interpersonal architecture (apparent involvement of family, institutions, or social networks), and intentional infliction by human actors.
Why Interpersonal Trauma Is Worse: When trauma is impersonal (natural disaster, accident), the world feels dangerous but people remain sources of safety. When trauma is interpersonal and intentional, people become threats.[90] This creates:
- Trust collapse: If family, therapists, or police dismiss or weaponize information, who can be trusted?
- Attachment disruption: Bonds that should provide security instead trigger fear
- Identity attacks: When targeting exploits spiritual beliefs, family roles, or professional identity, core self-concept collapses
Prevalence studies find approximately 3-4% of U.S. adults meet CPTSD criteria, with rates substantially higher in populations exposed to prolonged interpersonal trauma.[91]
C. Learned Helplessness and Cognitive Autonomy Destruction
A crucial psychological mechanism in understanding algorithmic harassment severity is learned helplessness—the state where individuals stop attempting to escape adversity because they have learned their actions are ineffective.[92]
Classic learned helplessness experiments exposed animals to inescapable shock—no action could stop or avoid it. Subsequently, when escape became possible, animals didn't attempt it.[93] In humans, learned helplessness produces:
- Contingency judgment: "Nothing I do affects outcomes"
- Futility expectation: "Trying is pointless"
- Motivational collapse: Passivity and cessation of goal-directed behavior
- Cognitive deficits: Difficulty learning new responses
- Emotional deficits: Depression, resignation, hopelessness[94]
Algorithmic Harassment as Helplessness Engineering: Every design element reinforces helplessness:
- Inescapability demonstrated: Blocking channels → new channels appear; different platforms → content follows; disconnecting → content queues for return
- Control removed: Cannot identify perpetrator, stop operation, prove intent, or access legal remedies
- Coping defeated: Therapy → content mocks therapy; social support → content creates conflicts; legal action → escalation
- Adaptation prevented: If distress decreases, system escalates until distress returns
- Agency eliminated: Institutions attribute harm to target's "paranoia" rather than perpetrator action
Research on uncontrollable stress demonstrates it produces more severe psychological harm than controllable stress of equal objective intensity—the critical variable is not pain but powerlessness.[95] This finding from experimental psychology directly parallels torture research: studies of torture survivors identify loss of control, not pain severity, as the primary predictor of trauma depth.[96]
D. Comparison to Recognized Torture Methods
The psychological mechanisms documented in algorithmic harassment targets closely parallel those observed in recognized torture survivors:
Isolation and Social Death: Torture often involves isolating victims from support systems. Algorithmic harassment achieves isolation through deniability—targets cannot prove the operation exists, leading to disbelief from family, friends, and institutions.[97]
Identity Attack: Torture frequently targets core identity—political beliefs, religious faith, professional identity, family roles. Algorithmic content is often designed specifically to attack these identity foundations through symbolic targeting, spiritual messaging, or professional discrediting.[98]
Unpredictability: Torture effectiveness increases when victims cannot predict timing or intensity. Algorithmic delivery's variable timing and platform diversity creates exactly this unpredictability.[99]
Prolonged Duration: While physical torture sessions are typically bounded by interrogation needs, psychological torture can continue indefinitely. Algorithmic operations spanning years ensure exposure exceeding thresholds for permanent neurobiological changes.[100]
Institutional Betrayal: When state institutions meant to protect instead dismiss, blame, or psychiatrically pathologize torture survivors, trauma deepens profoundly. Algorithmic targeting routinely produces this institutional betrayal when police decline investigation, courts seal files to "protect" targets, and medical systems interpret complaints as paranoid delusions.[101]
Research comparing torture survivors to other trauma populations finds distinctive patterns—not just PTSD but profound identity fragmentation, pervasive distrust, cognitive incapacitation, and resistance to treatment.[102] Documented algorithmic harassment targets report remarkably similar symptom profiles.
VI. Legal Frameworks and Institutional Gaps
A. Current Legal Inadequacy
Existing legal frameworks addressing online harassment prove largely ineffective against algorithmic targeting for several structural reasons:
Criminal Law Barriers: Traditional criminal harassment and cyberstalking statutes require identifiable perpetrators and explicit threats or communications.[103]
Algorithmic targeting circumvents these requirements:
- Content creators claim general audience content without specific targets
- Platform algorithms intermediate delivery, breaking direct perpetrator-victim connection
- Timing coincidences are dismissed as pattern-seeking by complainants
- Intent becomes nearly impossible to prove without access to internal communications
Courts have generally rejected attempts to expand criminal harassment statutes to cover algorithmic recommendations, viewing platforms as neutral intermediaries rather than instrumentalities of harassment.[104]
Civil Remedy Barriers:
Civil lawsuits face procedural and evidentiary obstacles:
- Identification: Plaintiffs must identify defendants, but algorithmic operations involve distributed networks of channels potentially operated by anonymous actors
- Causation: Proving that platform recommendations caused harm requires demonstrating algorithmic manipulation, typically requiring expert technical testimony and platform data access
- Damages: Courts often minimize psychological harm without physical injury, and economic damages from reputation destruction are difficult to quantify
- Costs: Discovery costs for technical forensics, expert witnesses, and multi-defendant litigation often exceed potential recovery, making cases economically nonviable[105]
Platform Immunity:
Section 230 of the US Communications Decency Act provides broad immunity to platforms for user-generated content.[106] While courts have carved out exceptions for platform conduct contributing to harm, recommendations are generally treated as editorial functions protected by Section 230.[107] Platforms successfully argue that algorithmic curation is protected speech, preventing liability for harmful recommendation patterns.
Privacy and Data Protection:
Data protection laws like GDPR and CCPA provide some user rights regarding data collection and use but don't address algorithmic targeting specifically.[108] Enforcement is typically initiated by regulators rather than individuals, limiting practical utility for harassment victims. Moreover, data brokers operating in jurisdictions with weak regulation can compile comprehensive profiles that enable targeting despite theoretical protections.[109]
B. The UN Cybertorture Framework
In March 2020, UN Special Rapporteur on Torture Nils Melzer presented Report A/HRC/43/49, which provides an international legal framework for understanding technology-mediated psychological harm as potential torture under the Convention Against Torture.[110]
Melzer's Core Arguments:
1. Psychological torture is legally equivalent to physical torture: Severe mental suffering meets CAT Article 1 requirements even without physical pain or visible injury.[111]
2. Cybertechnology enables torture: Digital technologies can inflict severe mental suffering while avoiding physical evidence, creating operational advantages for perpetrators and profound challenges for victims.[112]
3. Four elements analysis: Torture requires (a) severe pain or suffering, (b) intentional infliction, (c) prohibited purpose (punishment, coercion, intimidation, discrimination), and (d) powerlessness (typically state involvement or acquiescence).[113]
4. Digital powerlessness: Powerlessness need not require physical custody—it exists when victims "effectively lose capacity to resist or escape infliction of pain or suffering," which can occur through digital dependency, institutional foreclosure, or coercive control.[114]
Application to Algorithmic Harassment:
Sophisticated algorithmic targeting operations can meet all four torture elements:
- Severe suffering: Documented neurobiological damage, CPTSD, learned helplessness, and functional impairment
- Intentional infliction: Pattern evidence (timing precision, personalization, coordination) excluding coincidence
- Prohibited purpose: Operations typically arise from or escalate around protected activity (litigation, whistleblowing, speech)
- Powerlessness: Digital dependency (cannot function in modern society without platforms) combined with institutional foreclosure (no legal remedy available)
While the Melzer framework has not yet been broadly applied in domestic courts, it provides an international law foundation for treating algorithmic harassment as a serious human rights violation rather than mere technological nuisance.[115]
C. Evidentiary and Procedural Challenges
Beyond substantive legal gaps, procedural and evidentiary barriers prevent effective response to algorithmic harassment.
Pattern Evidence Versus Direct Proof:
Algorithmic targeting is typically proven through circumstantial pattern evidence—for example, consistent timing alignment between online content and offline events, tailored thematic content that reflects private information, and coordination across multiple channels or platforms.[140][156]
Canadian criminal law already recognizes a structured way to reason from circumstantial evidence. In R. v. Villaroman, 2016 SCC 33, the Supreme Court emphasized that where the Crown relies on circumstantial proof, the trier of fact must:
- assess the evidence in light of ordinary human experience, and
- ask whether guilt is the only reasonable inference, while recognizing that not every imaginative or merely conceivable alternative must be excluded.[161]
Villaroman, drawing on the Alberta Court of Appeal in Dipnarine, treats circumstantial proof as legitimate so long as the inferences drawn are reasonable and grounded in the evidentiary record, not speculation.[161] This is essentially a coherence-based framework: the fact-finder evaluates the whole pattern and asks which explanation best fits the evidence while eliminating other reasonable alternatives, not every far-fetched hypothesis.
Applied to algorithmic harassment, Villaroman suggests that courts and investigators should not dismiss pattern evidence simply because there is no “smoking gun” message. Instead, they should:
- map out competing explanations (ordinary algorithmic noise, user misperception, targeted manipulation), and
- use timing data, cross-platform coordination, and content personalization to rule out benign explanations where the pattern is too precise and persistent to be plausibly accidental.[140][156][161]
At present, however, courts trained to focus on direct threats or explicit communications often treat sophisticated pattern evidence as too close to conjecture. Correlation is frequently deemed inadequate to prove causation, especially where the underlying algorithms are opaque and proprietary.[116] Villaroman’s inference framework offers doctrinal cover for a more rigorous, coherence-based approach, but it has not yet been systematically mobilized in the context of platform-mediated harms.
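One way to operationalize the Villaroman-style exercise of ruling out “ordinary algorithmic noise” is a simple Monte Carlo timing test: how often would randomly timed uploads match the sealed milestones as tightly as the observed uploads do? The dates and counts below are invented for illustration, and a real forensic analysis would also need to address selection effects, multiple comparisons, and how the milestone set was chosen.

```python
import random

random.seed(0)
WINDOW_DAYS = 365
milestones = [40, 122, 240, 301]          # day-of-year of sealed events (invented)
observed_posts = [40, 41, 121, 240, 300]  # day-of-year of uploads (invented)

def hits(posts, events, tol=1):
    # Number of posts landing within `tol` days of any milestone.
    return sum(any(abs(p - e) <= tol for e in events) for p in posts)

observed = hits(observed_posts, milestones)
trials = 100_000
as_extreme = sum(
    hits(random.sample(range(WINDOW_DAYS), len(observed_posts)), milestones) >= observed
    for _ in range(trials)
)
print(f"observed matches: {observed}/{len(observed_posts)}")
print(f"estimated p-value: {as_extreme / trials:.5f}")  # effectively 0 here
```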
Technical Complexity:
Most judges, lawyers, and traditional experts lack the technical literacy required to evaluate claims of algorithmic manipulation. Establishing that a particular recommendation stream reflects coordinated targeting rather than ordinary platform behaviour demands:
- detailed log-level analysis,
- knowledge of ranking and moderation systems, and
- comparative baselines (e.g., control accounts, shadow profiles).
As Pasquale documents, commercial platforms are often “black boxes” whose internal workings are deliberately obscured from public view.[25][117] Such expert analysis is expensive and, in many cases, practically unavailable to individual litigants or publicly funded counsel.
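As a concrete example of the comparative-baseline idea, an auditor can instrument fresh control accounts with matched innocuous seed behaviour and compare how often a suspect content cluster is recommended to them versus to the complainant's account. The exposure figures below are invented; real audits of this kind (sock-puppet studies) require careful matching and, ideally, platform logs.

```python
from statistics import mean, stdev

# Fraction of recommendations drawn from the suspect content cluster over the
# same observation period, with matched seed behaviour (hypothetical figures).
target_exposure = 0.18
control_exposures = [0.010, 0.004, 0.012, 0.007, 0.009, 0.011, 0.006, 0.008]

mu, sigma = mean(control_exposures), stdev(control_exposures)
z = (target_exposure - mu) / sigma
print(f"controls: mean={mu:.3f}, sd={sigma:.3f}; target z-score={z:.0f}")
# A target account many standard deviations above matched controls is evidence
# against "ordinary algorithmic noise", though the analysis must still control
# for feedback effects of the target's own viewing history.
```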
Platform Data Access and Discovery:
Definitively proving algorithmic targeting typically requires access to internal platform data such as:
- recommendation logs and engagement telemetry,
- model documentation or algorithm parameters, and
- internal tools for detecting coordinated inauthentic behaviour.[137][142]
Platforms routinely resist production of this information, invoking trade secrets, privacy obligations, and jurisdictional limits.[118][143][144] In practice, courts—concerned with proportionality and third-party burdens—often hesitate to compel extensive technical discovery, leaving complainants unable to access the very evidence needed to substantiate their claims.
Jurisdictional Fragmentation and Local Harms:
Coordinated operations may involve:
- content creators dispersed across multiple countries,
- platforms headquartered outside Canada, and
- targets whose digital footprint spans several jurisdictions.[9][138]
At first glance, this transnational structure appears to dilute accountability: no single court has obvious authority over all actors, infrastructure, and data flows. However, Canadian law has already developed tools to treat locally felt harms as a sufficient anchor for jurisdiction.
In Libman v. The Queen, the Supreme Court adopted the “real and substantial link” test for offences with extraterritorial elements, holding that Canada may assert jurisdiction where a significant part of the harmful effects occur and/or are felt in Canada.[162] In SOCAN v. CAIP, the Court applied a similar effects-based approach to Internet communications, confirming that transmissions and uses that materially impact rights-holders in Canada can fall within Canadian regulatory and adjudicative authority, even when servers and intermediaries are partly abroad.[163]
For algorithmic harassment, this means that the psychological, reputational, and economic harms experienced by targets in Canada—along with Canadian-based viewing, data collection, or ad-targeting—can supply the necessary real and substantial link. Rather than creating a legal vacuum, jurisdictional fragmentation can therefore be framed as offering multiple entry points: Canadian criminal, civil, and regulatory processes can be anchored in the locus of harm, while international instruments on cybercrime and cooperation help address foreign elements.[154][155] Comparative cases such as Yahoo! Inc. v. LICRA simply underscore the need to manage conflicts of law carefully, not to abandon locally affected victims to a jurisdictional void.[119]
Psychiatric Weaponization and Credibility Erosion:
Finally, when individuals report algorithmic harassment to authorities or institutions, the response often centres on psychiatric assessment rather than technical investigation. Complaints about highly tailored, hard-to-verify targeting are readily reframed as symptoms of paranoia or delusional thinking—especially when the patterns are subtle, longitudinal, or technically complex.[156]
This dynamic produces a particularly damaging double bind:
- Remaining silent leaves the harassment unchallenged and ongoing.
- Seeking help risks being pathologized, subjected to psychiatric intervention, and having one’s credibility permanently undermined in future legal or administrative proceedings.
The concern is amplified by Canadian jurisprudence: in Blencoe v. British Columbia (Human Rights Commission), the Supreme Court recognized that serious, state-imposed psychological stress can engage security of the person under s. 7 of the Charter, even as it set a demanding threshold for when such harm will be judicially acknowledged.[159][160] In combination, these evidentiary and procedural challenges help explain why, even under comparatively robust Canadian constitutional and human-rights norms, coordinated algorithmic harassment remains functionally unredressed by existing institutions.
VII. Scale and Prevalence: Indicators of a Growing Problem
A. Indirect Evidence of Prevalence
As of December 2025 and to the author's knowledge, direct, population-level statistics on algorithmic harassment campaigns are still scarce. But several converging empirical trends point to a problem that is both real and growing.
1. Platform manipulation and influence industry growth
Research on “organized social media manipulation” documents a rapid global expansion of coordinated influence operations. A 2019 Oxford Internet Institute report found organized manipulation campaigns in 70 countries, up from 28 in 2017, with subsequent updates identifying further growth and professionalization.[121] Although much of this activity targets elections or public opinion rather than individual litigants, the infrastructure—botnets, influence farms, targeting expertise, and measurement tools—is technically transferable to person-focused harassment.
2. Data broker and behavioural profiling markets
The data broker ecosystem that supplies granular targeting intelligence has grown into a multi-hundred-billion-dollar industry.[122] Commercial dossiers combining location history, purchase records, inferred interests, and psychographic attributes are available at low marginal cost.[123] While such data do not normally include sealed court records or medical files, they provide the behavioural “surface area” that can be exploited once more sensitive information is introduced from state or institutional sources, as discussed in Sections III and VII(D).
3. Cybercrime-as-a-service and harassment tooling
Studies of cybercrime-as-a-service describe a flourishing market for turnkey operations: bot rentals, fake account creation, content amplification, and paid harassment or “reputation destruction” packages.[124] These services lower the technical threshold for sophisticated operations and create a ready interface between conventional criminal actors and the more advanced, state-adjacent infrastructures described earlier.
4. Worsening online harassment trends
Longitudinal surveys indicate that online abuse is becoming both more common and more severe. Pew Research Center data show that the proportion of Americans reporting “severe” online harassment (e.g., physical threats, sustained harassment, stalking, sexual harassment) rose from 18% in 2014 to 25% in 2021.[125] Within that population, the share describing their experience as involving “organized” or “coordinated” harassment has increased, suggesting a shift from isolated abuse toward more networked forms of targeting.[126]
5. Cross-platform and multi-account synchronization: the evidentiary gap narrows
Until recently, one of the main empirical gaps concerned direct measurement of coordinated behaviour across multiple platforms and accounts—the kind of synchronization that algorithmic harassment relies on. That gap is now narrowing:
- Large-scale studies of coordinated behaviour on single platforms have shown that accounts engaged in influence operations can be detected by analyzing synchronized actions (e.g., near-simultaneous posting of the same URLs, hashtags, or narratives).[228]
- The “synchronized action framework” developed by Magelinski and colleagues formalizes this approach, modelling coordination as statistically unlikely temporal and content overlaps across many accounts, and demonstrating that such metrics can reliably surface organized campaigns.[229]
- More recent work extends these methods across platforms. Minici, Cinus, Luceri, Ferrara and others have developed techniques to detect coordinated inauthentic activity linking, for example, X (Twitter), YouTube, and other platforms in the run-up to the 2024 U.S. election.[230] These studies show that multi-platform operations can be identified and quantified by tracking shared links, narratives, and timing patterns across different services.
- A growing literature on cross-platform information operations now treats multi-platform synchronization as a standard analytical object rather than an anomaly, demonstrating that coordinated campaigns routinely span several platforms and language communities at once.[231]
Taken together, this emerging body of work does not yet provide a neat prevalence figure for person-targeted algorithmic harassment. But it does establish that (a) cross-platform, multi-account coordination is empirically measurable at scale, and (b) such coordination is already being detected in the wild in the context of political and geopolitical operations. Technically, nothing prevents the same methods from being applied to campaigns targeting individuals rather than electorates—particularly when the same infrastructure (bots, data pipelines, narrative toolchains) is involved.
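For intuition, the following sketch shows the core move of synchronization analysis in the spirit of the synchronized-action framework:[229] counting how often pairs of accounts post the same URL within a short window, far more often than independent behaviour would produce. It is a simplified illustration under stated assumptions, not the published method; the record format and thresholds are hypothetical.

# Sketch of synchronized-action detection: flag account pairs that
# repeatedly post the same URL within a short time window.
from collections import defaultdict

def synchronized_pairs(posts, window_s=60, min_events=5):
    """posts: iterable of (account_id, url, unix_ts).
    Returns account pairs whose same-URL postings fall within `window_s`
    of each other at least `min_events` times."""
    by_url = defaultdict(list)
    for account, url, ts in posts:
        by_url[url].append((ts, account))
    pair_counts = defaultdict(int)
    for url, events in by_url.items():
        events.sort()  # chronological order per URL
        for i, (ts_i, acc_i) in enumerate(events):
            for ts_j, acc_j in events[i + 1:]:
                if ts_j - ts_i > window_s:
                    break  # later events are outside the window
                if acc_i != acc_j:
                    pair_counts[tuple(sorted((acc_i, acc_j)))] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_events}

Applied to person-targeted campaigns, the same machinery would operate over content converging on a single user rather than on a hashtag or electorate.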
6. Gang-stalking and coordinated targeting as a documented experiential pattern
The experience of coordinated targeting—often labelled “gang-stalking” by those affected—has long been reported by individuals who describe systematic surveillance, multi-perpetrator harassment, and technology-mediated psychological operations. Historically, these reports have been dismissed as implausible or pathologized as evidence of mental illness.
A recent content-analysis study treated these accounts as experiential data rather than symptoms, documenting recurring patterns of perceived surveillance, multi-actor coordination, and psychological harm.[156] The authors concluded that such experiences are sufficiently patterned and harmful that they warrant serious study rather than reflexive dismissal.
The convergence between these documented experiential patterns and the technical mechanisms of algorithmic harassment explored in this paper suggests that we may be observing different facets of a broader underlying phenomenon. Algorithmic targeting provides a plausible contemporary delivery system for forms of coordinated harassment that have previously lacked a credible operational model. The existence of a scientifically documented “gang-stalking” phenomenology, independent of the present analysis, therefore serves as indirect corroboration that coordinated targeting of individuals is an operational reality—and that institutional dismissal of such reports functions as an additional barrier to protection and remedy.
B. Vulnerable Populations
Certain populations face elevated risk for algorithmic harassment due to higher visibility, contentious social positioning, or power asymmetries:
Whistleblowers and Activists: Individuals exposing corporate or government misconduct face sophisticated retaliation including online harassment campaigns.[127] The well-resourced nature of institutional defendants enables purchase of advanced harassment services.
Litigation Participants: Civil plaintiffs, particularly in high-stakes commercial or employment cases, report coordinated online harassment designed to pressure settlement or withdrawal.[128]
Journalists: Research documents extensive harassment of journalists, particularly women journalists covering controversial topics.[129] While much harassment involves direct threats, increasingly sophisticated campaigns employ algorithmic amplification and coordination.[130]
Domestic Violence Survivors: Technology-facilitated abuse is increasingly recognized as a major dimension of intimate partner violence.[131] Abusers use location tracking, social media monitoring, and coordinated harassment by proxy to maintain control even after physical separation.[132]
Political Candidates and Officials: Electoral politics involves extensive social media manipulation, with candidates and officials facing coordinated campaigns involving bots, coordinated harassment, and reputation destruction.[133]
Minority Communities: LGBTQ+ individuals, racial minorities, and religious minorities face disproportionate online harassment, with evidence of coordinated campaigns targeting identity-based vulnerabilities.[134]
C. Warning Signs and Red Flags
Several indicators suggest an individual may be experiencing algorithmic targeting rather than coincidental content exposure:
Temporal Convergence: Content appears with suspicious timing—within hours of sealed court hearings, private medical appointments, confidential meetings, or personal events that are not publicly known.[135]
Content Personalization: Recommendations include imagery, language, themes, or references with specific personal meaning inconsistent with general audience content.[136]
Channel Proliferation: Multiple apparently independent channels emerge simultaneously with thematically coordinated content targeting similar vulnerabilities.[137]
Cross-Platform Coordination: Similar content appears across multiple platforms (YouTube, TikTok, Instagram, Facebook) with timing and themes suggesting coordination rather than organic trends.[138]
Escalation Following Complaints: When targets document patterns or file complaints, operations intensify rather than ceasing—indicating operational awareness of target responses.[139]
Persistence Beyond Plausibility: Content continues appearing despite the target changing platforms, keywords, interests, and usage patterns—suggesting active manipulation rather than algorithmic artifact.[140]
Private Information Exposure: Content references information from sealed court documents, confidential medical records, private conversations, or other sources the target has not disclosed publicly.[141]
When multiple indicators converge, especially temporal precision tied to non-public events, intentional targeting becomes the only reasonable inference, notwithstanding each individual actor's deniability.
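The force of the temporal-convergence indicator can be quantified with a simple null model. If thematically matching content would surface by coincidence in any given 24-hour window with some background probability, then the chance that many documented private events are each followed within hours by matching content is a binomial tail. The sketch below uses illustrative rates, not measured values.

# Back-of-envelope quantification of "temporal convergence": under a null
# model where matching content surfaces independently at a background rate,
# how unlikely is the observed number of private events each followed
# within 24 hours by matching content? Rates are illustrative assumptions.
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Example: suppose matching content has a 5% chance of appearing in any
# given 24-hour window by coincidence, and 12 of 15 documented private
# events were each followed by matching content within 24 hours.
print(binom_tail(15, 12, 0.05))  # on the order of 1e-13: coincidence fails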
D. Case Study: The Dempsey Matter—Evidence of State-Adjacent Infrastructure
The Dempsey case illustrates state-adjacent infrastructure in operation. What began as a routine attempt to clarify minority-shareholder status in a federally linked technology company (“CAGE”) in 2021 escalated into a multi-year pattern of algorithmic harassment, procedural foreclosure, and financial weaponization across several Canadian courts. Crucially, the harassment and institutional irregularities pre-dated and in significant part occasioned the filing of the British Columbia shareholder petition S-220956 in February 2022, rather than emerging as a downstream side-effect of contentious litigation.[197][198]
1. Project-Centred Context and Cross-Venue Foreclosure
- Initial Involvement: Dempsey's involvement with CAGE arose from a 2016 shareholder agreement and 2020 M&A documentation executed while employed by the CEO of a federally sponsored technology firm with national-defence linkages.[197][199]
- Litigation Start: The absence of RCMP support amid ongoing "diffuse and disrupt" activities following the close of a 2021 shareholder settlement with CAGE compelled Dempsey to file S-220956 in the Supreme Court of British Columbia.[198] See further context concerning surrounding anomalous events at https://www.refugeecanada.net/bci.
- Pattern of Foreclosure Across Venues: A consistent pattern of foreclosure, rather than adjudication, emerged across multiple courts:
  - BCSC: Neutral third-party discovery was ordered (April 1, 2022) but effectively neutralized through subsequent protective and referral orders. The petition was dismissed before the discovery could occur.[208]
  - BCCA: Leave was denied within minutes, with reasons that appeared to be prepared in advance and that expressly flagged the case as potentially causing “social unrest.”[209]
  - NSSC/NSCA: Stays were refused despite a live record of procedural and billing irregularities. Files were sealed. A false narrative was published that mischaracterized the BC proceedings entirely. Contempt was found without proper analysis, and security-for-costs orders escalated dramatically as sealing orders expanded to remove the probative record from public view.[211][212]
  - SCC: The registry held stay motions and related materials beyond mandatory filing deadlines for over five months while NS enforcement proceeded under a false pretense. A single justice later dismissed leave and all motions without reasons.[210]
- Synthesis: Each decision, in isolation, might be ordinary error. Viewed together, they trace a coherent pattern: every pathway that might expose underlying irregularities is progressively closed, across multiple courts and provinces, shielding the same underlying interests.
2. The +~9,000% Tariff Billing Scandal and the Requirement for State-Adjacent Assurances and Coordination
The most concrete indicator of a project beyond ordinary commercial litigation is the billing pattern.
- The Cost Disparity: Across two related proceedings (S-220956 and S-229680), CAGE counsel claimed $376,201.97 in total costs for approximately 737.7 hours of billable time, against roughly 867 minutes (~14.5 hours) of actual court time across nine short hearings managed by articling students.[215]
- The Multiple: The expected cost on any reasonable tariff comparison would be in the range of $4,000–$4,500, producing an ~84× multiple over ordinary costs (see the worked check following this list).[215]
- Certification: CAGE’s lead counsel—a uniformed legal advisor to the Canadian Armed Forces—swore affidavits asserting these costs were "incurred" and "necessary."[200][216] A BCSC master certified essentially 100% of the claimed amounts (November 16, 2023), despite being alerted to the facial unreasonableness and without applying established reasonableness standards.[217][180]
- Inferences of Assurances: The billing implies that participants had reason to believe, before the proceedings unfolded, that:
  - courts would rule in ways that permitted and insulated the scandal, and that appellate recourse would be foreclosed;
  - enforcement mechanisms would operate reliably; and
  - police and professional oversight bodies would decline to intervene or would actively suppress complaints.[215][216][217][218]
- Conclusion: Such assurances are not realistically available to a mid-sized technology company acting alone. They strongly suggest a third-party project interest and a controlled institutional environment associated with state-adjacent operations.[179][223][224]
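As a worked check of the figures above, using only the numbers stated in the text: $376,201.97 ÷ $4,500 ≈ 84× and $376,201.97 ÷ $4,000 ≈ 94×. The certified amount therefore sits at roughly 84 to 94 times the tariff baseline, an increase on the order of +8,300% to +9,300%, consistent with the ~84× multiple and the +~9,000% figure in this section's heading.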
3. Sealing, Registry Conduct, and Rule-of-Law Drift
The procedural handling of the files systematically protected the same interests, departing from ordinary rule-of-law expectations:
- Broad Sealing: Sealing and protection orders were used broadly, extending full secrecy over entire files where narrow redactions should have sufficed under established legal precedents (Sherman Estate and Dagenais–Mentuck). This included an information asymmetry in which selective public material remained published atop the otherwise sealed files.[64][208][212]
- Suppression of Evidence: Key affidavits documenting the billing disparity and procedural irregularities were sealed or sidelined. Dempsey was not permitted to present probative evidence at key junctures; when he did, its consideration was foreclosed and omitted from both the conclusory decisions and the public record.[211][218]
- Registry Deviations: In the related proceeding S-229680, registry staff declined to enforce multiple mandatory provisions of the Class Proceedings Act, PD-5, and related rules. This steered the case away from the required case-management framework to a specific chambers justice for summary dismissal.[213][226]
Implication: Systematic, multi-month deviations from mandatory procedure—always in the same direction—suggest staff believed they were working inside a “special file” with externally signalled expectations, rather than an ordinary adversarial proceeding.[213]
4. Police and Oversight Obstruction
The conduct of police and oversight bodies consistently functioned to protect the operation rather than the complainant.
- Police Non-Investigation: From 2021 onward, Dempsey repeatedly reported AI-assisted harassment and targeting indicators to HRP and the RCMP.[204][206] Despite substantial documentation, HRP declined to investigate and, in every case, generated reports that grossly mischaracterized or omitted key facts, at times using pejorative framing.[204][206]
- The May 11–13, 2023 Sequence (Emblematic Incident):
  - A minor backyard bonfire mishap on May 11, extinguished by Dempsey within two minutes, was followed within approximately three minutes by two fire trucks and multiple police officers—a response time difficult to reconcile with an unreported event, and a scope grossly disproportionate to the need.[202][203]
  - Within the surrounding 48-hour window, multiple YouTube channels pushed AI-assisted “prophetic” content with imagery closely mirroring the incident and the subsequent institutional response.[201][202]
  - On May 13, a mental-health crisis team (two social workers, three officers, and a police van) arrived and transported Dempsey involuntarily to hospital for psychiatric evaluation. A medical resident later concluded the police response had been disproportionate. AI-generated content featuring "Brittany the Intuitive Cosmic Wifey", posted within minutes of the arrest, described the May 13 intervention in a manner that exactly mirrored the NS Health triage report in text and symbolism.[203][214]
- Oversight Dismissal: Complaints to the RCMP Civilian Review and Complaints Commission and related oversight bodies were dismissed without substantive engagement.[207]
- The Reframing Tactic: Rather than investigating the operation, institutions reframed the complaint as a mental-health concern, creating a psychiatric record to undermine Dempsey’s credibility in future legal forums—an example of “institutional betrayal.”[97][214]
- Conclusion: This pattern is consistent with operational guidance that the file is to be contained, not investigated, and with oversight structures institutionally constrained from scrutinizing the underlying operation.[204][206][207]
5. Contemporaneous AI-Assisted Harassment
Dempsey documented hundreds of instances of algorithmically delivered content tightly synchronized with sealed proceedings, private appointments, and other non-public events throughout the litigation.[201][205][219][220]
- Broad Dataset Characteristics:
  - Content appears within hours or days of sealed hearings and private events not visible to the public.[201][205][219][220]
  - Themes and imagery repeatedly mirror sealed exhibits, confidential medical information, and non-public biographical details, sometimes incorporating visual elements from sealed materials in AI-generated thumbnails.[219]
  - Targeting demonstrates closed-loop adaptation: when consumption patterns are altered to avoid triggers, new channels and content types emerge to restore exposure.[221]
- Inference: Under the circumstantial-evidence framework in R. v. Villaroman, given the volume, temporal precision, and personalization of the targeting, benign algorithmic drift and random YouTube coincidence are no longer reasonable competing explanations; deliberate targeting is the only reasonable inference.[161][201][205][219][220][221][222] An estranged family connection to a principal perpetrator further reinforces this inference (https://www.refugeecanada.net/family).
6. Synthesis: State-Adjacent Infrastructure in Practice
The combined pillars—cross-venue foreclosure, the 84×-tariff scandal, systematic sealing/registry deviation, police/oversight obstruction, and contemporaneous AI-assisted harassment—are exceedingly difficult to reconcile with:
- Ordinary judicial error.
- Aggressive but lawful advocacy.
- Free-floating cybercrime disconnected from institutional power.
By contrast, they are readily intelligible if the Dempsey matter is viewed as a project-centric, state-adjacent operation where:
- Intelligence: State-level surveillance infrastructure (or protected access to it) supplies intelligence for sub-24-hour convergence with sealed and private events.[201][202][205][219][220]
- Manipulation: Platform-level or infrastructure-level access enables cross-platform algorithmic manipulation and adaptive targeting.[219][220][221]
- Agency Control: Courts and registries operate within a constrained space that forecloses exposure of the underlying irregularities.[208][209][210][211][212][213][226]
- Containment: Police and oversight bodies function to contain and pathologize the complainant rather than to investigate the operation.[204][206][207][214]
The Dempsey matter is a concrete instantiation of the state-adjacent infrastructure model described in Section III, demonstrating how existing national-security, policing, and platform architectures can be combined in a Canadian context to deliver algorithmic psychological operations against a single litigant while preserving plausible deniability at every institutional layer.[32][110][195] These characteristics presuppose a robust project-centred interest, as outlined at https://www.refugeecanada.net/bci and https://www.refugeecanada.net/testimony.
VIII. Policy and Legal Recommendations
A. Platform Accountability Measures
Effective response to algorithmic harassment requires platform accountability beyond current self-regulation:
1. Mandatory Targeting Detection Systems
Platforms should implement and operate systems to detect coordinated targeting of individuals:
- Temporal analysis: Flag content appearing across multiple channels with suspicious timing convergence (see the sketch following this list)
- Personalization detection: Identify content with characteristics suggesting individual targeting rather than general audience appeal
- Coordination analysis: Detect networks of apparently independent channels with coordinated themes, timing, or targets
- Anomalous recommendation patterns: Flag cases where recommendation patterns diverge from standard personalization algorithms[142]
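A minimal sketch of such a temporal-analysis flag follows, assuming an internal impression log of (user, channel, theme, timestamp) records; the schema, window, and thresholds are illustrative assumptions rather than any platform's actual tooling. The alert fires when several previously unseen channels all deliver same-theme content to one user within a single day, the burst pattern described in Section VII(C).

# Sketch of a platform-side temporal-convergence alert: flag a user when a
# burst of previously unseen channels delivers same-theme content to them
# inside one window. Schema and thresholds are illustrative assumptions.
from collections import defaultdict

def convergence_alerts(impressions, window_s=86_400, min_new_channels=4):
    """impressions: iterable of (user_id, channel_id, theme, unix_ts).
    Returns (user, theme, window_start) triples where at least
    `min_new_channels` channels the user had never seen before all
    pushed the same theme within one window."""
    seen = defaultdict(set)     # user -> channels already shown to them
    firsts = defaultdict(list)  # (user, theme) -> first-exposure times
    alerts = []
    for user, channel, theme, ts in sorted(impressions, key=lambda r: r[3]):
        if channel not in seen[user]:
            seen[user].add(channel)
            times = firsts[(user, theme)]
            times.append(ts)
            # count distinct-channel first exposures in the trailing window
            recent = [t for t in times if ts - t <= window_s]
            if len(recent) >= min_new_channels:
                alerts.append((user, theme, recent[0]))
                firsts[(user, theme)] = []  # reset to avoid duplicate alerts
    return alerts

In practice such a rule would be one signal among several, combined with the personalization and coordination analyses listed above before any human review.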
2. Complaint Investigation Obligations
When credible targeting complaints are filed, platforms should be legally required to:
- Preserve evidence: Maintain recommendation logs, algorithm parameters, and coordination data
- Conduct internal investigation: Deploy technical resources to evaluate claims
- Respond substantively: Provide detailed findings rather than form-letter dismissals
- Take action: Suspend coordinated operations pending investigation; remove content violating policies; deplatform persistent violators[143]
3. Transparency and Audit Requirements
Platform algorithms should be subject to external audit:
- Algorithm documentation: Platforms must maintain detailed documentation of recommendation system operations
- Independent audit: Regular third-party audits evaluating whether systems are exploited for targeting
- Victim access: Targets of potential harassment should receive detailed information about why content was recommended
- Researcher access: Qualified researchers should have controlled access to study manipulation patterns[144]
4. Civil and Criminal Liability Reform
Section 230 immunity should not extend to cases where platforms have actual knowledge of coordinated targeting operations and fail to intervene.[145] Platforms should face:
- Civil liability: Damages when knowing facilitation of harassment operations is proven
- Criminal liability: Prosecution under aiding-and-abetting or conspiracy theories when platforms actively support operations
- Regulatory penalties: Substantial fines for systematic failures to detect and prevent targeting
B. Law Enforcement and Prosecutorial Reform
Criminal justice systems must develop capacity to investigate and prosecute algorithmic harassment:
1. Specialized Units and Training
Law enforcement agencies should establish specialized cybercrime units with:
- Technical expertise in platform architecture, algorithm manipulation, and digital forensics
- Psychological training recognizing harassment severity and trauma impacts
- Pattern analysis capabilities to identify coordinated operations from distributed evidence[146]
2. Updated Criminal Statutes
Criminal harassment laws should be updated to explicitly cover algorithmic targeting:
- Eliminate direct communication requirement: Harassment via algorithm manipulation should be legally equivalent to direct threats
- Pattern evidence provisions: Establish that systematic targeting proven through circumstantial evidence meets criminal intent standards
- Conspiracy and coordination: Criminalize coordination in algorithmic targeting operations even when individual content pieces are facially lawful[147]
3. Victim Support and Protection
Criminal justice systems should provide:
- Immediate protective measures: Emergency orders requiring platforms to suspend targeting operations pending investigation
- Victim advocates: Specialized support for technology-facilitated abuse victims
- Institutional protocols: Prevent psychiatric weaponization by treating complaints as deserving investigation rather than evidence of mental illness[148]
C. Data Broker Regulation
The data broker industry that enables targeting must face stricter regulation:
1. Consent and Transparency Requirements
Comprehensive data protection laws should require:
- Opt-in consent for sensitive data collection (health, location, biometric, communications)
- Transparency: Individuals must be notified when profiles are compiled and sold
- Access rights: Individuals can review and challenge profile accuracy
- Deletion rights: Individuals can demand profile deletion with meaningful enforcement[149]
2. Use Restrictions
Data broker sales should be restricted to prevent harassment:
- Prohibited uses: Ban data sales for targeting, harassment, or surveillance purposes
- Customer vetting: Require verification of legitimate uses before profile access
- Audit trails: Maintain records of who purchases data and for what stated purposes
- Liability: Data brokers face civil and criminal liability when data is used for harassment[150]
D. Civil Litigation Reform
The civil legal framework should be reformed to enable effective victim remedies:
1. Evidentiary Standards
Courts should adopt plaintiff-favorable evidentiary rules:
- Pattern evidence sufficiency: Timing convergence and personalization patterns should satisfy intent and causation requirements
- Expert testimony: Fund expert witnesses for indigent plaintiffs; establish standard methodologies for proving algorithmic manipulation
- Reverse burden: Once a prima facie case of targeting is established, defendants must provide innocent explanations[151]
2. Discovery and Procedure
Civil procedure rules should facilitate evidence access:
- Mandatory platform disclosure: Court orders requiring recommendation logs, algorithm documentation, and coordination analysis
- Protective orders: Prevent operational escalation during litigation
- Expedited procedures: Fast-track targeting cases to prevent years-long harm while awaiting trial[152]
3. Damages and Remedies
Courts should recognize the full scope of algorithmic harassment harms:
- Neurobiological injury: Damages for permanent brain changes documented on neuroimaging
- Complex PTSD: Recognition that chronic targeting produces severe, lasting psychological injury
- Social death: Compensation for reputation destruction and network collapse
- Cognitive incapacitation: Damages for lasting impairment of executive function and democratic participation capacity
- Punitive damages: Substantial punitive awards to deter future operations[153]
E. International Cooperation
Algorithmic harassment operations often involve actors and infrastructure across multiple jurisdictions, requiring international cooperation:
1. Mutual Legal Assistance Treaties
Update MLATs to cover algorithmic harassment:
- Recognize targeting as serious crime warranting cooperation
- Streamline evidence sharing across jurisdictions
- Enable coordinated investigation of distributed operations[154]
2. Platform Regulation Harmonization
International agreements should establish minimum platform accountability standards:
- Universal targeting detection requirements
- Standardized complaint investigation procedures
- Cross-border enforcement cooperation
- Coordinated sanctions for systematic violators[155]
IX. Conclusion: The Urgency of Response
This analysis establishes several critical findings about coordinated algorithmic harassment campaigns:
Operational Reality: Algorithmic targeting is not theoretical speculation but documented operational practice. Evidence from multiple domains—technical platform research, organized cybercrime investigations, documented harassment patterns, psychological trauma studies, and legal case analysis—converges to demonstrate that sophisticated actors are exploiting recommendation algorithms to systematically target individuals with psychologically harmful content.
Technical Feasibility: Modern platform architectures combine detailed behavioral profiling, engagement-optimized recommendation algorithms, and closed-loop adaptation in ways that enable precision psychological operations. The technical barriers to algorithmic targeting have largely collapsed: the necessary capabilities are commercially available through crime-as-a-service markets or purchasable by well-resourced institutional actors.
Organizational Infrastructure: Multiple actor categories—organized crime groups, state-sponsored operators, corporate entities, and networked harassment communities—possess both motivation and capability to conduct algorithmic targeting. The infrastructure supporting these operations has grown rapidly, including bot networks, data brokers, manipulation services, and harassment-for-hire offerings.
Severe Psychological Harm: Algorithmic targeting produces documented severe psychological injury through mechanisms including chronic inescapable threat (driving neurobiological damage), personalized vulnerability exploitation, learned helplessness engineering, and institutional betrayal. Documented impacts parallel recognized torture outcomes—CPTSD, identity fragmentation, cognitive incapacitation, and permanent functional impairment. The harm severity often exceeds traditional harassment precisely because algorithmic operations achieve ambient inescapability, closed-loop escalation, and deniability-driven isolation that direct threats cannot.
Legal and Institutional Inadequacy: Current legal frameworks—criminal harassment statutes, civil tort law, platform self-regulation, and data protection regimes—prove systematically inadequate to address algorithmic targeting. Evidentiary barriers, procedural obstacles, platform immunity, and institutional dismissal of complaints create comprehensive foreclosure of remedies. Targets face severe ongoing harm without effective means of stopping operations or obtaining accountability.
Growing Prevalence: Multiple indicators suggest algorithmic harassment is increasing in frequency and sophistication—expansion of manipulation service industries, rising online harassment rates, growing data broker markets, and documented case patterns across multiple contexts (litigation, whistleblowing, family disputes, political opposition). While precise quantification remains difficult due to reporting barriers and recognition gaps, available evidence indicates a problem of substantial and growing scale.
The Imperative for Action
The convergence of these findings compels urgent policy response. When technology-mediated operations inflict severe psychological injury through systematic exploitation of platform architectures, and when current legal and institutional frameworks provide no effective remedy or accountability, the result is a human rights crisis hidden by technological complexity and operational deniability.
Several normative conclusions follow:
First, recognition: Algorithmic harassment must be recognized as a distinct and serious form of abuse—not mere "online nastiness" or "algorithm problems" but sophisticated psychological operations capable of inflicting permanent harm. Legal systems, mental health professionals, and institutions must develop frameworks for identifying and responding to these operations.
Second, accountability: All actors enabling algorithmic targeting must face meaningful consequences. Platforms facilitating operations through willful blindness or inadequate detection systems should face civil and criminal liability. Data brokers enabling targeting through comprehensive profile sales should be strictly regulated. Operators conducting campaigns should face prosecution as harassment or torture perpetrators. Current impunity cannot continue.
Third, victim support: Survivors of algorithmic harassment campaigns require specialized support including trauma treatment, reputation reconstruction, economic compensation, and legal assistance. The current response—institutional dismissal, psychiatric pathologization, and complete remedy foreclosure—is ethically and practically unacceptable.
Fourth, prevention: Platform architectures must be redesigned to prevent exploitation for targeting purposes. This requires mandatory detection systems, investigation obligations, transparency mechanisms, and structural changes to reduce recommendation systems' manipulability. The current business model—engagement maximization without accountability for harms—cannot persist.
Fifth, research: Substantial additional research is needed to fully quantify prevalence, document impacts, evaluate countermeasures, and develop evidence-based interventions. This requires researcher access to platform data, funding for longitudinal studies of targeted individuals, and interdisciplinary collaboration across computer science, psychology, law, and human rights fields.
Final Assessment
Coordinated algorithmic harassment represents a profound challenge to human dignity, psychological integrity, and democratic participation. When platform systems designed to optimize engagement can be weaponized to inflict torture-level psychological harm while maintaining plausible deniability for all actors involved, we face a technological capability profoundly incompatible with human rights commitments and civilized society.
The evidence reviewed in this paper demonstrates that this threat is not hypothetical but operational, not isolated but systematic, not trivial but severe. Individuals are experiencing permanent neurobiological damage, psychological disintegration, social annihilation, and cognitive incapacitation through algorithmic operations that current legal and institutional frameworks cannot adequately address.
The response required is categorical, urgent, and comprehensive: recognition of algorithmic targeting as serious abuse, meaningful accountability for enabling platforms and operating actors, robust support for survivors, preventive platform regulation, and sustained research investment. Anything less leaves citizens vulnerable to psychological destruction through invisible technological warfare—a regime incompatible with democracy, human rights, or human dignity.
The algorithms are not neutral. The operations are not coincidental. The harm is not imaginary. Recognition of these realities must be the foundation for legal, policy, and institutional reform adequate to protect human psychological integrity in an age of algorithmic omnipresence.
References
[1] Citron, D.K. (2014). Hate Crimes in Cyberspace. Harvard University Press; Marwick, A.E. (2021). "Morally Motivated Networked Harassment as Normative Reinforcement." Social Media + Society, 7(2), 1-12.
[2] Lenhart, A., et al. (2016). "Online Harassment, Digital Abuse, and Cyberstalking in America." Data & Society Research Institute.
[3] Covington, P., et al. (2016). "Deep Neural Networks for YouTube Recommendations." Proceedings of the 10th ACM Conference on Recommender Systems, 191-198.
[4] Gomez-Uribe, C.A., & Hunt, N. (2016). "The Netflix Recommender System: Algorithms, Business Value, and Innovation." ACM Transactions on Management Information Systems, 6(4), 1-19.
[5] Kosinski, M., et al. (2013). "Private traits and attributes are predictable from digital records of human behavior." PNAS, 110(15), 5802-5805.
[6] Federal Trade Commission (2014). "Data Brokers: A Call for Transparency and Accountability." FTC Report.
[7] Ribeiro, M.H., et al. (2020). "Auditing radicalization pathways on YouTube." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131-141.
[8] Matz, S.C., et al. (2017). "Psychological targeting as an effective approach to digital mass persuasion." PNAS, 114(48), 12714-12719.
[9] Bradshaw, S., & Howard, P.N. (2019). "The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation." Oxford Internet Institute.
[10] Davidson, J., et al. (2010). "The YouTube video recommendation system." Proceedings of the Fourth ACM Conference on Recommender Systems, 293-296.
[11] Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Edge of Power. PublicAffairs.
[12] Angwin, J. (2014). Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance. Times Books.
[13] Youyou, W., et al. (2015). "Computer-based personality judgments are more accurate than those made by humans." PNAS, 112(4), 1036-1040.
[14] Ricci, F., et al. (2015). Recommender Systems Handbook (2nd ed.). Springer.
[15] He, X., et al. (2017). "Neural Collaborative Filtering." Proceedings of the 26th International Conference on World Wide Web, 173-182.
[16] Beam, M.A. (2014). "Automating the News: How Personalized News Recommender System Design Choices Impact News Reception." Communication Research, 41(8), 1019-1041.
[17] Srnicek, N. (2017). Platform Capitalism. Polity Press.
[18] Hinds, J., & Joinson, A.N. (2019). "Human and computer personality prediction from digital footprints." Current Directions in Psychological Science, 28(2), 204-211.
[19] Matz, S.C., & Netzer, O. (2017). "Using Big Data as a window into consumers' psychology." Current Opinion in Behavioral Sciences, 18, 7-12.
[20] Tufekci, Z. (2018). "YouTube, the Great Radicalizer." The New York Times, March 10, 2018.
[21] Alfano, M., et al. (2021). "Technologically scaffolded atypical cognition: the case of YouTube's recommender system." Synthese, 199(3-4), 835-858.
[22] Haroon, M., et al. (2022). "Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles." ACM Transactions on Recommender Systems, 1(1), 1-36.
[23] Cyphers, B., & Gebhart, G. (2020). "Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance." Electronic Frontier Foundation.
[24] Schneier, B. (2015). Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World. W.W. Norton.
[25] Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
[26] Eslami, M., et al. (2015). "I always assumed that I wasn't really that close to [her]: Reasoning about Invisible Algorithms in News Feeds." Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 153-162.
[27] O'Callaghan, D., et al. (2015). "Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems." Social Science Computer Review, 33(4), 459-478.
[28] Ferrara, E., et al. (2016). "The Rise of Social Bots." Communications of the ACM, 59(7), 96-104.
[29] Varol, O., et al. (2017). "Online Human-Bot Interactions: Detection, Estimation, and Characterization." Proceedings of the 11th International AAAI Conference on Web and Social Media, 280-289.
[30] Nguyen, T.T., et al. (2014). "Exploring the Filter Bubble: The Effect of Using Recommender Systems on Content Diversity." Proceedings of the 23rd International Conference on World Wide Web, 677-686.
[31] Munn, L. (2019). "Alt-right pipeline: Individual journeys to extremism online." First Monday, 24(6).
[32] Melzer, N. (2020). "Psychological torture and ill-treatment." UN Doc. A/HRC/43/49 (20 March 2020).
[33] Lusthaus, J. (2018). Industry of Anonymity: Inside the Business of Cybercrime. Harvard University Press.
[34] BAE Systems (2024). "Research reveals the real perpetrators of digital crime." https://www.baesystems.com/en/article/bae-systems-research-reveals-the-real-perpetrators-of-digital-crime
[35] Broadhurst, R., et al. (2014). "Organizations and Cyber crime: An Analysis of the Nature of Groups engaged in Cyber Crime." International Journal of Cyber Criminology, 8(1), 1-20.
[36] Cresci, S., et al. (2020). "A Decade of Social Bot Detection." Communications of the ACM, 63(10), 72-83.
[37] Snyder, P., et al. (2017). "15 Minutes of Unwanted Fame: Detecting and Characterizing Doxing." Proceedings of the 2017 Internet Measurement Conference, 432-444.
[38] Solove, D.J. (2007). The Future of Reputation: Gossip, Rumor, and Privacy on the Internet. Yale University Press.
[39] Woolley, S.C., & Howard, P.N. (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press.
[40] Hoofnagle, C.J., et al. (2019). "The European Union general data protection regulation: what it is and what it means." Information & Communications Technology Law, 28(1), 65-98.
[41] Hutchings, A., & Clayton, R. (2016). "Exploring the provision of online booter services." Deviant Behavior, 37(10), 1163-1178.
[42] U.S. Department of Defense (2010). "Joint Publication 3-13.2: Psychological Operations." Joint Chiefs of Staff.
[43] Badawy, A., et al. (2018). "Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign." Proceedings of the 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining, 258-265.
[44] Helmus, T.C., et al. (2018). "Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe." RAND Corporation.
[45] Rid, T. (2020). Active Measures: The Secret History of Disinformation and Political Warfare. Farrar, Straus and Giroux.
[46] Paul, C., & Matthews, M. (2016). "The Russian 'Firehose of Falsehood' Propaganda Model." RAND Corporation Perspective, PE-198-OSD.
[47] Solove, D.J. (2008). Understanding Privacy. Harvard University Press.
[48] Powell, A., et al. (2018). Digital Criminology: Crime and Justice in Digital Society. Routledge.
[49] Henry, N., & Powell, A. (2018). "Technology-Facilitated Sexual Violence: A Literature Review of Empirical Research." Trauma, Violence, & Abuse, 19(2), 195-208.
[50] Pring, G.W., & Canan, P. (1996). SLAPPs: Getting Sued for Speaking Out. Temple University Press.
[51] Marwick, A.E. (2021). "Morally Motivated Networked Harassment as Normative Reinforcement." Social Media + Society, 7(2), 1-12.
[52] Blackwell, L., et al. (2018). "Classification and Its Consequences for Online Harassment: Design Insights from HeartMob." Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1-19.
[53] Amnesty International (2018). "Toxic Twitter: A Toxic Place for Women." https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1/
[54] Beskow, D.M., & Carley, K.M. (2020). "Bot-ivistm: Assessing information manipulation in social media using network analytics." In Emerging Research Challenges and Opportunities in Computational Social Network Analysis and Mining (pp. 19-42). Springer.
[55] Ledwich, M., & Zaitsev, A. (2020). "Algorithmic extremism: Examining YouTube's rabbit hole of radicalization." First Monday, 25(3).
[56] Varol, O., et al. (2017). "Online Human-Bot Interactions: Detection, Estimation, and Characterization." Proceedings of the 11th International AAAI Conference on Web and Social Media, 280-289.
[57] Shao, C., et al. (2018). "The spread of low-credibility content by social bots." Nature Communications, 9(1), 4787.
[58] Stella, M., et al. (2018). "Bots increase exposure to negative and inflammatory content in online social systems." PNAS, 115(49), 12435-12440.
[59] Ribeiro, M.H., et al. (2020). "Auditing radicalization pathways on YouTube." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131-141.
[60] Duggan, M. (2017). "Online Harassment 2017." Pew Research Center.
[61] Citron, D.K. (2014). Hate Crimes in Cyberspace. Harvard University Press.
[62] Federal Trade Commission (2014). "Data Brokers: A Call for Transparency and Accountability." FTC Report.
[63] Angwin, J. (2014). Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance. Times Books.
[64] Melzer, N. (2020). "Psychological torture and ill-treatment." UN Doc. A/HRC/43/49.
[65] Near v. Minnesota, 283 U.S. 697 (1931); Garcetti v. Ceballos, 547 U.S. 410 (2006).
[66] Stark, E. (2007). Coercive Control: How Men Entrap Women in Personal Life. Oxford University Press.
[67] Duggan, M. (2017). "Online Harassment 2017." Pew Research Center.
[68] Lenhart, A., et al. (2016). "Online Harassment, Digital Abuse, and Cyberstalking in America." Data & Society Research Institute.
[69] Amnesty International (2018). "Toxic Twitter: A Toxic Place for Women."
[70] Chen, G.M., et al. (2020). "Understanding Twitter Geolocation for Cyberstalking Applications." Cyberpsychology, Behavior, and Social Networking, 23(3), 188-194.
[71] Bradshaw, S., & Howard, P.N. (2019). "The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation." Oxford Internet Institute.
[72] Hussein, E., et al. (2020). "Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube." Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1), 1-27.
[73] Duggan, M. (2017). "Online Harassment 2017." Pew Research Center.
[74] Lenhart, A., et al. (2016). "Online Harassment, Digital Abuse, and Cyberstalking in America." Data & Society Research Institute.
[75] McEwen, B.S. (2007). "Physiology and neurobiology of stress and adaptation: Central role of the brain." Physiological Reviews, 87(3), 873-904.
[76] Yehuda, R., et al. (1991). "Lymphocyte glucocorticoid receptor number in posttraumatic stress disorder." American Journal of Psychiatry, 148, 499-504.
[77] Miller, G.E., et al. (2007). "If it goes up, must it come down? Chronic stress and the hypothalamic-pituitary-adrenocortical axis in humans." Psychological Bulletin, 133(1), 25-45.
[78] Smith, M.E. (2005). "Bilateral hippocampal volume reduction in adults with post-traumatic stress disorder: A meta-analysis of structural MRI studies." Hippocampus, 15(6), 798-807.
[79] Schiepek, G., et al. (2016). "Analysis of the Metabolic and Structural Brain Changes in Patients With Torture-Related Post-Traumatic Stress Disorder (TR-PTSD) Using ¹⁸F-FDG PET and MRI." Medicine, 95(15).
[80] Koch, S.B.J., et al. (2016). "Aberrant resting-state brain activity in posttraumatic stress disorder: A meta-analysis and systematic review." Depression and Anxiety, 33(7), 592-605.
[81] Lupien, S.J., et al. (2009). "Effects of stress throughout the lifespan on the brain, behaviour and cognition." Nature Reviews Neuroscience, 10, 434-445.
[82] Nicholson, A.A., et al. (2020). "Classifying heterogeneous presentations of PTSD via the default mode, central executive, and salience networks with machine learning." NeuroImage: Clinical, 27, 102262.
[83] Liddell, B.J., et al. (2022). "Torture exposure and the functional brain: investigating disruptions to intrinsic network connectivity using resting state fMRI." Translational Psychiatry, 12, 35.
[84] Teicher, M.H., & Samson, J.A. (2016). "Annual Research Review: Enduring neurobiological effects of childhood abuse and neglect." Journal of Child Psychology and Psychiatry, 57(3), 241-266.
[85] Cloitre, M., et al. (2013). "The International Trauma Questionnaire: Development of a self-report measure of ICD-11 PTSD and complex PTSD." Acta Psychiatrica Scandinavica, 127(5), 351-362.
[86] Dvir, Y., et al. (2014). "Childhood maltreatment, emotional dysregulation, and psychiatric comorbidities." Harvard Review of Psychiatry, 22(3), 149-161.
[87] Herman, J.L. (2015). Trauma and Recovery: The Aftermath of Violence—From Domestic Abuse to Political Terror (Revised Edition). Basic Books.
[88] Charuvastra, A., & Cloitre, M. (2008). "Social bonds and posttraumatic stress disorder." Annual Review of Psychology, 59, 301-328.
[89] Brewin, C.R., et al. (2017). "A review of current evidence regarding the ICD-11 proposals for diagnosing PTSD and complex PTSD." Clinical Psychology Review, 58, 1-15.
[90] Freyd, J.J. (1996). Betrayal Trauma: The Logic of Forgetting Childhood Abuse. Harvard University Press.
[91] Gilbar, O., et al. (2018). "ICD-11 complex PTSD: U.S. National prevalence of PTSD and complex PTSD among adults." Psychological Trauma: Theory, Research, Practice, and Policy, 10(3), 299-305.
[92] Seligman, M.E.P. (1975). Helplessness: On Depression, Development, and Death. W.H. Freeman.
[93] Maier, S.F., & Seligman, M.E.P. (1976). "Learned helplessness: Theory and evidence." Journal of Experimental Psychology: General, 105(1), 3-46.
[94] Abramson, L.Y., et al. (1978). "Learned helplessness in humans: Critique and reformulation." Journal of Abnormal Psychology, 87(1), 49-74.
[95] Amat, J., et al. (2005). "Medial prefrontal cortex determines how stressor controllability affects behavior and dorsal raphe nucleus." Nature Neuroscience, 8, 365-371.
[96] Başoğlu, M., et al. (2007). "Torture vs other cruel, inhuman, and degrading treatment: is the distinction real or apparent?" Archives of General Psychiatry, 64(3), 277-285.
[97] Smith, C.P., & Freyd, J.J. (2014). "Institutional betrayal." American Psychologist, 69(6), 575-587.
[98] Scarry, E. (1985). The Body in Pain: The Making and Unmaking of the World. Oxford University Press.
[99] Reisner, S. (2011). "Psychologists and military interrogators rethink the psychology of torture." In Interrogations, Forced Feedings, and the Role of Health Professionals: New Perspectives on International Human Rights, Humanitarian Law, and Ethics (pp. 19-40). Routledge.
[100] Silove, D., et al. (2002). "Torture as a predictor of PTSD symptoms in refugees." Journal of Traumatic Stress, 15(2), 131-137.
[101] Platt, M.G., & Freyd, J.J. (2015). "Betray my trust, shame on me: Shame, dissociation, fear, and betrayal trauma." Psychological Trauma: Theory, Research, Practice, and Policy, 7(4), 398-404.
[102] Steel, Z., et al. (2009). "Association of torture and other potentially traumatic events with mental health outcomes among populations exposed to mass conflict and displacement." JAMA, 302(5), 537-549.
[103] 18 U.S.C. § 2261A (Interstate stalking); Model Penal Code § 250.4 (Harassment).
[104] Doe v. Backpage.com, 817 F.3d 12 (1st Cir. 2016).
[105] Citron, D.K. (2014). Hate Crimes in Cyberspace. Harvard University Press, pp. 187-215.
[106] 47 U.S.C. § 230(c)(1) (Section 230 of the Communications Decency Act).
[107] Klonick, K. (2018). "The New Governors: The People, Rules, and Processes Governing Online Speech." Harvard Law Review, 131(6), 1598-1670.
[108] Regulation (EU) 2016/679 (General Data Protection Regulation); Cal. Civ. Code §§ 1798.100-1798.199 (California Consumer Privacy Act).
[109] Solove, D.J. (2008). Understanding Privacy. Harvard University Press.
[110] Melzer, N. (2020). "Psychological torture and ill-treatment." UN Doc. A/HRC/43/49 (20 March 2020). Available at: https://undocs.org/A/HRC/43/49
[111] Ireland v. United Kingdom (1978). ECHR Application No. 5310/71 (European Court of Human Rights recognizing psychological torture).
[112] Melzer, N. (2020). "Psychological torture and ill-treatment." UN Doc. A/HRC/43/49, paragraphs 22-27.
[113] United Nations (1984). "Convention Against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment." UN Doc. A/RES/39/46, Article 1.
[114] Melzer, N. (2020). "Psychological torture and ill-treatment." UN Doc. A/HRC/43/49, paragraphs 42-48.
[115] Melzer, N. (2021). "Torture and other cruel, inhuman or degrading treatment or punishment." UN Doc. A/76/172 (16 July 2021). Available at: https://undocs.org/A/76/172
[116] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993) (establishing standards for expert testimony admissibility).
[117] Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
[118] U.S. House of Representatives (2020). "Investigation of Competition in Digital Markets: Majority Staff Report and Recommendations." Subcommittee on Antitrust, Commercial and Administrative Law of the Committee on the Judiciary.
[119] Yahoo! Inc. v. La Ligue Contre Le Racisme Et L'Antisemitisme, 433 F.3d 1199 (9th Cir. 2006) (en banc) (addressing jurisdictional issues in international internet cases).
[120] Blencoe v. British Columbia (Human Rights Commission), [2000] 2 S.C.R. 307, 2000 SCC 44 (Supreme Court of Canada recognizing psychological harm from procedural delay under Charter s. 7).
[121] Bradshaw, S., & Howard, P.N. (2019). "The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation." Oxford Internet Institute, University of Oxford.
[122] West, S.M. (2019). "Data Capitalism: Redefining the Logics of Surveillance and Privacy." Business & Society, 58(1), 20-41.
[123] Federal Trade Commission (2014). "Data Brokers: A Call for Transparency and Accountability." FTC Report, May 2014.
[124] Van Wegberg, R., et al. (2018). "Plug and Prey? Measuring the Commoditization of Cybercrime via Online Anonymous Markets." Proceedings of the 27th USENIX Security Symposium, 1009-1026.
[125] Duggan, M. (2017). "Online Harassment 2017." Pew Research Center, July 11, 2017 (comparison to 2014 baseline study showing increased severity).
[126] Lenhart, A., et al. (2016). "Online Harassment, Digital Abuse, and Cyberstalking in America." Data & Society Research Institute.
[127] Garcetti v. Ceballos, 547 U.S. 410 (2006) (addressing First Amendment protections for public employee speech); Near v. Minnesota, 283 U.S. 697 (1931) (establishing prior restraint doctrine).
[128] Powell, A., et al. (2018). Digital Criminology: Crime and Justice in Digital Society. Routledge, pp. 143-167.
[129] Amnesty International (2018). "Toxic Twitter: A Toxic Place for Women." March 21, 2018. Available at: https://www.amnesty.org/en/latest/research/2018/03/online-violence-against-women-chapter-1/
[130] Chen, G.M., et al. (2020). "Understanding Twitter Geolocation for Cyberstalking Applications." Cyberpsychology, Behavior, and Social Networking, 23(3), 188-194.
[131] Stark, E. (2007). Coercive Control: How Men Entrap Women in Personal Life. Oxford University Press.
[132] Henry, N., & Powell, A. (2018). "Technology-Facilitated Sexual Violence: A Literature Review of Empirical Research." Trauma, Violence, & Abuse, 19(2), 195-208.
[133] Singer, P.W., & Brooking, E.T. (2018). LikeWar: The Weaponization of Social Media. Eamon Dolan Books/Houghton Mifflin Harcourt.
[134] Citron, D.K. (2014). Hate Crimes in Cyberspace. Harvard University Press, pp. 23-58 (documenting disproportionate targeting of minority communities).
[135] Case pattern documentation; specific citations withheld to protect victim confidentiality consistent with research ethics protocols and privacy considerations.
[136] Kosinski, M., Stillwell, D., & Graepel, T. (2013). "Private traits and attributes are predictable from digital records of human behavior." Proceedings of the National Academy of Sciences, 110(15), 5802-5805.
[137] Cresci, S., Di Pietro, R., Petrocchi, M., Spognardi, A., & Tesconi, M. (2020). "A Decade of Social Bot Detection." Communications of the ACM, 63(10), 72-83.
[138] Bradshaw, S., & Howard, P.N. (2019). "The Global Disinformation Order: 2019 Global Inventory of Organised Social Media Manipulation." Oxford Internet Institute (documenting cross-platform coordination).
[139] Marwick, A.E. (2021). "Morally Motivated Networked Harassment as Normative Reinforcement." Social Media + Society, 7(2), 1-12.
[140] Ribeiro, M.H., Ottoni, R., West, R., Almeida, V.A.F., & Meira, W., Jr. (2020). "Auditing radicalization pathways on YouTube." Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 131-141.
[141] Federal Trade Commission (2014). "Data Brokers: A Call for Transparency and Accountability." FTC Report (documenting availability of private information including sealed records through data broker channels).
[142] Gorwa, R., Binns, R., & Katzenbach, C. (2020). "Algorithmic content moderation: Technical and political challenges in the automation of platform governance." Big Data & Society, 7(1), 1-15.
[143] Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
[144] Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press.
[145] Citron, D.K., & Wittes, B. (2017). "The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity." Fordham Law Review, 86, 401-423.
[146] European Commission (2017). "Antitrust: Commission fines Google €2.42 billion for abusing dominance as search engine by giving illegal advantage to own comparison shopping service." Press Release IP/17/1784, June 27, 2017.
[147] Model Penal Code § 250.4 (American Law Institute proposed amendments addressing cyberstalking and technology-facilitated harassment).
[148] Roberts, S.T. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
[149] Regulation (EU) 2016/679 of the European Parliament and of the Council (General Data Protection Regulation), Articles 6-7 (lawful processing), 15-17 (data subject rights including access, rectification, and erasure).
[150] Federal Trade Commission (2014). "Data Brokers: A Call for Transparency and Accountability." FTC Report (recommendations section), pp. 47-63.
[151] Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579 (1993); Kumho Tire Co. v. Carmichael, 526 U.S. 137 (1999) (extending Daubert standards to technical expert testimony).
[152] Federal Rules of Civil Procedure, Rule 26(b)(1) (scope of discovery); proposed amendments for algorithmic harassment cases addressing algorithm audit and platform data access.
[153] Restatement (Second) of Torts § 46 (1965) (Outrageous Conduct Causing Severe Emotional Distress—intentional infliction of emotional distress).
[154] Council of Europe (2001). "Convention on Cybercrime," European Treaty Series No. 185, Budapest (addressing international cooperation on cybercrime investigation and prosecution).
[155] Organisation for Economic Co-operation and Development (2011). "The Role of Internet Intermediaries in Advancing Public Policy Objectives." OECD Digital Economy Papers, No. 197, OECD Publishing, Paris.
[156] Sheridan, L., James, D.V., & Roth, J. (2020). "The Phenomenology of Group Stalking ('Gang-Stalking'): A Content Analysis of Subjective Experiences." International Journal of Environmental Research and Public Health, 17(7), 2506. https://doi.org/10.3390/ijerph17072506. Study analyzing first-person accounts to document core experiential patterns, concluding: "These findings constitute a potent reason why gang-stalking should be regarded as an important subject for study" and noting the need for research methodologies that do not incorporate pre-conceived dismissive assumptions.
[157] Criminal Code, R.S.C. 1985, c. C-46, s. 264 (Criminal Harassment).
[158] Canadian Resource Centre for Victims of Crime (2022). "Cyberstalking." Available at: https://crcvc.ca/wp-content/uploads/2021/09/Cyberstalking-_DISCLAIMER_Revised-Aug-2022_FINAL.pdf
[159] Blencoe v. British Columbia (Human Rights Commission), [2000] 2 S.C.R. 307, 2000 SCC 44 (recognizing security of person includes protection against serious state-imposed psychological harm).
[160] Canadian Charter of Rights and Freedoms, s. 7, Part I of the Constitution Act, 1982, being Schedule B to the Canada Act 1982 (UK), 1982, c. 11.
[161] R. v. Villaroman, 2016 SCC 33, [2016] 1 S.C.R. 1000 (Supreme Court of Canada guidance on circumstantial evidence and the line between reasonable inference and speculation).
[162] Libman v. The Queen, [1985] 2 S.C.R. 178 (Supreme Court of Canada adopting the “real and substantial link” test for offences with extraterritorial elements and emphasizing locally felt harms).
[163] Society of Composers, Authors and Music Publishers of Canada v. Canadian Assn. of Internet Providers, 2004 SCC 45, [2004] 2 S.C.R. 427 (SOCAN v. CAIP) (Supreme Court of Canada applying an effects-based, real-and-substantial-connection analysis to Internet communications causing harm in Canada).
[164] Communications Security Establishment (CSE) (2017, 2019, 2021). Cyber Threats to Canada’s Democratic Process (original report and 2019/2021 updates). Government of Canada.
[165] Department of Justice Canada (2017). Charter Statement – Bill C-59: An Act respecting national security matters.
[166] Citizen Lab (2017). Analysis of the Communications Security Establishment Act (Part 3 of Bill C-59). (Craig Forcese et al.).
[167] Canadian Civil Liberties Association (CCLA) (2017). Mass Surveillance and Bulk Collection in Bill C-59.
[168] Parsons, C., et al. (2020). To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. Citizen Lab, University of Toronto; see also “Algorithmic Policing in Canada Explained.”
[169] Office of the Privacy Commissioner of Canada (2021). Special Report to Parliament on the RCMP’s Use of Clearview AI; see also OPC news release “RCMP’s use of Clearview AI’s facial recognition technology” and related evidence before the House of Commons Standing Committee on Access to Information, Privacy and Ethics (ETHI).
[170] Khoo, C., Robertson, K., & Deibert, R. (2019). Installing Fear: A Canadian Legal and Policy Analysis of Using, Developing, and Selling Smartphone Spyware and Stalkerware Applications. Citizen Lab.
[171] Citizen Lab (2025). A First Look at Paragon’s Proliferating Spyware Operations; see also reporting on Canadian police services’ adoption of mercenary spyware and analysis in Policy Options.
[172] Citizen Lab (2025). A Preliminary Analysis of Bill C-2: Cross-Border Data-Sharing and Its Implications for Canadian Privacy and Human Rights.
[173] Helmus, T. C., et al. (2018). Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe. RAND Corporation; Huang, Z. Y. (2022). “China's Tech Platforms and Governance in Cyberspace.” Journal of Contemporary China, 31(135), 420–435.
[174] Case documentation: Halifax Regional Police and RCMP complaints and correspondence, 2021–2025. Available at refugeecanada.net/hrp and refugeecanada.net/crcc.
[175] Lusthaus, J. (2018). Industry of Anonymity: Inside the Business of Cybercrime. Harvard University Press, pp. 89–112.
[176] Case documentation: multi-jurisdictional litigation record (BCSC, BCCA, NSSC, NSCA, SCC) 2022–2025. Available at refugeecanada.net/litigation and subpages.
[177] Valente v. The Queen, [1985] 2 S.C.R. 673; Ocean Port Hotel Ltd. v. British Columbia (General Manager, Liquor Control and Licensing Branch), 2001 SCC 52, [2001] 2 S.C.R. 781.
[178] Case documentation: BCSC file S-229680 registry correspondence (Jan–Mar 2023) regarding refusal to enforce CPA and PD-5 case-management requirements.
[179] Affidavit evidence and cost certificates in BCSC S-220956 and S-229680, including MacKinnon Affidavits (17 Oct 2023) claiming 737.7 billable hours for ~14.5 hours of court time in routine matters managed by articling students.
[180] Beals v. Saldanha, 2003 SCC 72, [2003] 3 S.C.R. 416 (esp. paras. 73, 218, 220, 243, 265).
[181] MacKinnon, E. Affidavits sworn 17 Oct 2023 (BCSC S-220956; S-229680) and Law Society of BC professional profile identifying role as uniformed CAF legal advisor.
[182] Greenwald, G. (2014). No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State. Metropolitan Books.
[183] Landau, S. (2017). Listening In: Cybersecurity in an Insecure Age. Yale University Press.
[184] Bauman, Z., et al. (2014). “After Snowden: Rethinking the Impact of Surveillance.” International Political Sociology, 8(2), 121–144.
[185] Lyon, D. (2015). Surveillance After Snowden. Polity Press.
[186] U.S. Department of Defense (2010). Joint Publication 3-13.2: Psychological Operations; Singer, P. W., & Brooking, E. T. (2018). LikeWar: The Weaponization of Social Media. Houghton Mifflin Harcourt.
[187] Arquilla, J., & Ronfeldt, D. (1999). The Emergence of Noopolitik: Toward an American Information Strategy. RAND Corporation.
[188] Cadwalladr, C., & Graham-Harrison, E. (2018). “Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach.” The Guardian, 17 March 2018.
[189] NATO Strategic Communications Centre of Excellence (2021). Cognitive Warfare. Innovation Hub Report.
[190] Kania, E. B. (2016). “The PLA's Latest Strategic Thinking on the Three Warfares.” China Brief, 16(13).
[191] Rid, T. (2020). Active Measures: The Secret History of Disinformation and Political Warfare. Farrar, Straus and Giroux, pp. 387–412.
[192] Weiner, T. (2007). Legacy of Ashes: The History of the CIA. Doubleday; Scahill, J. (2007). Blackwater: The Rise of the World's Most Powerful Mercenary Army. Nation Books.
[193] R. v. Tobiass, [1997] 3 S.C.R. 391 (esp. paras. 91, 110).
[194] Forcese, C., & Roach, K. (2015). False Security: The Radicalization of Canadian Anti-Terrorism. Irwin Law.
[195] Melzer, N. (2020). “Psychological Torture and Ill-Treatment.” UN Doc. A/HRC/43/49.
[196] Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press, pp. 213–247.
[197] Dempsey, N. Shareholder agreement (27 July 2016) and M&A documentation (18 Sept 2020), redacted; background narrative at refugeecanada.net/litigation and refugeecanada.net/shareholder.
[198] Dempsey v. “CAGE”, BCSC S-220956 (petition filed February 2022); affidavit materials re Central Securities Register anomalies and FY2020 derecognition policy (refugeecanada.net/litigation; refugeecanada.net/shareholder).
[199] Public corporate filings and federal contract records documenting CAGE as a federally sponsored technology-development company with defence-sector relationships (cited in court materials).
[200] Law Society of British Columbia profile and BCSC affidavits identifying Emily MacKinnon as uniformed legal officer with the Canadian Armed Forces.
[201] Visual evidence of temporal convergence between online content and sealed proceedings (screenshots, timelines). Available at refugeecanada.net/cybertorture, refugeecanada.net/guide, refugeecanada.net/zersetzung, and various places throughout this website.
[202] Visual evidence dated 11–13 May 2023 showing YouTube content (text and visuals) perfectly aligning with same-day NS Health record. Contemporaneous within minutes of the event (refugeecanada.net/cybertorture).
[203] Audio recording of medical resident interview (13 May 2023) and documentation from Halifax Regional Police and Emergency Health Services concerning the same incident (refugeecanada.net/cybertorture; refugeecanada.net/hrp).
[204] Halifax Regional Police reports documenting false statements compared against recording and factual timeline (refugeecanada.net/hrp).
[205] Longitudinal documentation of 2021–2025 temporal convergences between sealed / private milestones and online content from AI-assisted cohort (refugeecanada.net/guide; refugeecanada.net/zersetzung).
[206] Halifax Regional Police and RCMP correspondence declining investigation despite extensive submissions; complaint files and summaries at refugeecanada.net/hrp.
[207] Civilian Review and Complaints Commission for the RCMP decision letters and related correspondence dismissing complaints (refugeecanada.net/crcc).
[208] BCSC S-220956 procedural record: discovery order (1 Apr 2022, Master Cameron), protective order (27 June 2022), referral order (12 Aug 2022), dismissal (4 Oct 2022, Justice Majawa).
[209] Dempsey v. “CAGE”, BCCA: leave to appeal denial (2023), citing that the subject matter could cause “social unrest.”
[210] Supreme Court of Canada, Application for Leave (June 2023) and related stay motions; registry chronology and dismissal (21 Dec 2023).
[211] Nova Scotia Supreme Court: stay motion decision; contempt decisions; custody conditions and medical impact (refugeecanada.net/litigation / refugeecanada.net/jailed / refugeecanada.net/jailed2).
[212] Nova Scotia Court of Appeal: security for costs orders; sealing order; appeal dismissals and cost awards (refugeecanada.net/enforcement / refugeecanada.net/gatekeeping).
[213] BCSC S-229680 registry correspondence (Jan–Mar 2023) documenting refusal to enforce CPA ss. 2, 7, 13; BCSC PD-5 paras. 1, 3–7; CRT Act s. 16; SCCR 22-1(2), 22-1(5).
[214] Platt, M. G., & Freyd, J. J. (2015). “Betray My Trust, Shame on Me: Shame, Dissociation, Fear, and Betrayal Trauma.” Psychological Trauma: Theory, Research, Practice, and Policy, 7(4), 398–404.
[215] Cost summaries and certificates: $376,201.97 total costs (S-220956 and S-229680), 737.7 billed hours vs. ~867 minutes of court time (refugeecanada.net/litigation; refugeecanada.net/civil / refugeecanada.net/felony2).
[216] MacKinnon, E. Affidavits sworn 17 Oct 2023 in BCSC S-220956 and S-229680 (para. 10 asserting costs “were incurred by the CAGE CEO” and “were necessary to conduct the proceeding”; refugeecanada.net/felony2).
[217] BCSC cost-assessment hearing, 16 Nov 2023 (Master Scarth): transcript and certificate granting 100% of claimed costs despite overlapping counsel and magnitude (refugeecanada.net/felony2).
[218] NSSC enforcement hearing (Mar 2024): transcript noting “you cannot refer to the affidavit” because of sealing, and refusing to consider billing-disparity evidence (Affidavit v. Clerk's Notes).
[219] Visual analysis of AI-generated thumbnails and imagery matching sealed exhibits and non-public biographical details (refugeecanada.net/cybertorture; refugeecanada.net/guide).
[220] Cross-platform coordination record (YouTube, TikTok, Instagram, Facebook) showing synchronized themes and timing over 2021–2025 (refugeecanada.net/guide).
[221] Closed-loop adaptation examples where content escalated after complaints or defensive steps and persisted despite blocking and platform changes (refugeecanada.net/zersetzung).
[222] R. v. Villaroman, 2016 SCC 33, [2016] 1 S.C.R. 1000 (esp. paras. 35–37 on circumstantial-evidence inference).
[223] Public and court-file materials documenting CAGE’s federal sponsorship and defence-sector contracts (supporting inference of heightened state interest).
[224] Law Society of BC and court-file materials confirming MacKinnon’s CAF legal-officer status and associated security-cleared role.
[225] NATO StratCom COE (2021). Cognitive Warfare; Canadian Armed Forces doctrinal materials on information operations and PSYOP (obtained via ATI, cited generally).
[226] BCSC S-229680 scheduling correspondence (Jan 2023) from Attorney General of Canada counsel steering matter to specific chambers justice outside PD-5 case-management framework; emails evidencing coordination with CAGE counsel.
[227] Society of Composers, Authors and Music Publishers of Canada v. Canadian Assn. of Internet Providers, 2004 SCC 45 – authorities on real-and-substantial-connection and effects-based jurisdiction, applied by analogy to multi-provincial institutional coordination (and Libman, supra).
[228] Tardelli, S., et al. (2022). “Multifaceted online coordinated behaviour in the 2020 US elections.” EPJ Data Science 11(1): 41 (demonstrating large-scale detection of coordinated behaviour using temporal and content-based signals).
[229] Magelinski, T., Ng, L.H.X., & Nwala, A. (2021). “Detecting Coordinated Online Behavior: A Synchronized Action Framework.” Preprint / technical report (formalizing synchronized actions as a basis for coordination detection).
[230] Minici, M., Cinus, F., Luceri, L., & Ferrara, E. (2024/2025). “Uncovering Coordinated Cross-Platform Information Operations Threatening the Integrity of the 2024 US Presidential Election Online Discussion.” (Cross-platform detection of coordinated inauthentic activity spanning multiple social media platforms).
[231] Cinus, F., et al. (2025). “Exposing Cross-Platform Coordinated Inauthentic Activity in the Run-Up to the 2024 U.S. Election.” In Proceedings of the Web Conference (WWW ’25) (documenting coordinated operations linking several platforms and providing a general framework for cross-platform CoIA analysis).
Additional Supporting References
For complete context, the following additional sources provide comprehensive background on the topics addressed:
Platform Architecture and Algorithms:
- Adomavicius, G., & Tuzhilin, A. (2005). "Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions." IEEE Transactions on Knowledge and Data Engineering, 17(6), 734-749.
- Koren, Y., Bell, R., & Volinsky, C. (2009). "Matrix Factorization Techniques for Recommender Systems." Computer, 42(8), 30-37.
- Ekstrand, M.D., Riedl, J.T., & Konstan, J.A. (2011). "Collaborative Filtering Recommender Systems." Foundations and Trends in Human-Computer Interaction, 4(2), 81-173.
Organized Cybercrime:
- Lusthaus, J. (2018). Industry of Anonymity: Inside the Business of Cybercrime. Harvard University Press.
- Holt, T.J., & Lampke, E. (2010). "Exploring stolen data markets online: products and market forces." Criminal Justice Studies, 23(1), 33-50.
- Leukfeldt, E.R., Lavorgna, A., & Kleemans, E.R. (2017). "Cybercriminal networks, social ties and online forums: Social ties versus digital ties within phishing and malware networks." British Journal of Criminology, 57(3), 704-722.
Psychological Operations and Information Warfare:
- U.S. Department of Defense (2010). "Joint Publication 3-13.2: Psychological Operations." Joint Chiefs of Staff.
- Jowett, G.S., & O'Donnell, V. (2018). Propaganda & Persuasion (7th ed.). SAGE Publications.
- Rid, T. (2020). Active Measures: The Secret History of Disinformation and Political Warfare. Farrar, Straus and Giroux.
Trauma Neurobiology:
- van der Kolk, B.A. (2014). The Body Keeps the Score: Brain, Mind, and Body in the Healing of Trauma. Viking.
- Bremner, J.D., et al. (1995). "MRI-based measurement of hippocampal volume in patients with combat-related posttraumatic stress disorder." American Journal of Psychiatry, 152, 973-981.
- Liddell, B.J., et al. (2022). "Torture exposure and the functional brain: investigating disruptions to intrinsic network connectivity using resting state fMRI." Translational Psychiatry, 12, 35.
Surveillance Capitalism:
- Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
- Eubanks, V. (2018). Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press.
- Noble, S.U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press.

CAGE Director

"The Billing Delta": 737.7 hours (the BOW-ing 737)

*CAGE Retainer Fee Claims*
Eight (8) 30-minute hearings with minimal preparation.
Seven (7) lawyers assigned to overlapping tasks.
The customary tariff is $4,500 ($500 x 8 hearings).

One (1) 20-minute hearing with minimal preparation.
Third-party assurances were required.
The claim for one hearing exceeded Canada's average annual salary.

"Special costs are fees a reasonable client would pay a reasonably competent solicitor to do the work described in the bill."
- Bradshaw Construction Ltd. v. Bank of Nova Scotia (1991), 54 B.C.L.R. (2d) 309 (S.C.), para. 44
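
Taking the cited figures at face value (737.7 billed hours against roughly 867 minutes of court time, and $376,201.97 in total claimed costs against the $4,500 customary tariff, per [179] and [215]), the scale of the delta can be stated plainly. Note that the tariff figure covers only the eight hearings, so the second comparison is illustrative rather than exact:

737.7 hours / (867 minutes / 60) = 737.7 / 14.45 ≈ 51 hours billed for every hour spent in court
$376,201.97 / $4,500 ≈ 83.6 times the customary tariff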

Organized & Scripted
Some actors reside overseas, while many others are domiciled here in Canada. These groups operate like online businesses: they are hired as contractors by governments and large companies, and once assigned to a project, they remain focused on it. As it pertains to this scandal, police have refused service and filed false reports rather than responding reasonably. This suggests a robust institutional interest.

Frameworks Like This Are Not Built for the Sake of One Target. They Can, However, Be Aimed at Anyone.
Convergence Involving Public Agencies
Disproportionate Timing & Response
On May 11th, 2023, I had a small backyard bonfire mishap involving nearby foliage that was resolved in roughly two minutes with a few buckets of water. Two fire trucks arrived at the three-minute mark, with several police officers in tow. The alacrity of the response seemed impossible, and it was wildly disproportionate to the scope of the incident. I spoke briefly to the officers and then settled in for the night. Less than two days later, on the morning of Saturday, May 13th, a large white van rolled into the driveway, accompanied by a police paddy wagon. Two social workers and three police officers identified themselves as a "mental health crisis team". They interviewed me in the living room while I was still in my pajamas. I was able to glean from the social worker that her concern had ultimately stemmed from pejorative guidance provided by HRP, not from the innocuous fire incident itself. I was then asked to accompany them to the Halifax QEII Health Centre. I peaceably objected to the unfounded violation of my privacy (R. v. Ahmad, 2020 SCC 11 at paragraph 38) before being handcuffed and placed in the back seat of the van. I waited in the ER lobby with an HRP officer for roughly five hours before being interviewed by a medical resident. After a brief discussion, which I recorded and which is shown below, the resident was satisfied that HRP's response had been disproportionate. The event was telegraphed, as has been the case with other milestones, and was likewise reflected in visuals that appear to be AI-generated.
The visuals below are dated May 11th through May 13th and share characteristics of the events described: the campfire incident with its disproportionately fast three-minute response by both police and fire units (a response standard likely impossible under normal conditions, and far out of proportion to the need), the use of wilted flowers to start the campfire, a white pick-up van, and the ensconced, camera-equipped room where I was interviewed by resident Eastman. The visual posted by "Cosmic Wifey" on May 13th, 2023, depicting flowers and a white pick-up van, referencing the pick-up, and suggesting a special designation, may be AI-generated and is a compelling example of the species of scandal I am addressing here. The other actors shown, "Prophetic Record", "Stephanie P. Smith", "Word of God with Lola", and "Jordan's Journey", reference a vigilance standard, the visit, the pick-up, and the interview in the ensconced room. It is possible I have been connected to the dark web for several years, not unlike a CSIS lab rat. The use of AI and algorithms suggests big-tech commercial interests, as such an operation would not otherwise be feasible. State actors, in partnership with the same, have ongoing visibility into my biometric data. A minimal sketch of how this kind of temporal convergence can be quantified follows below.
See 4IR White Paper [Here] Concerning Plausible Enabling Technologies.
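
The convergence pattern described above lends itself to quantitative treatment. What follows is a minimal sketch in Python, under stated assumptions, of how temporal convergence between documented private milestones and public content timestamps could be scored against a chance baseline, in the spirit of the synchronized-action and coordination-detection methods cited at [228]–[231]. The timestamps, the 24-hour window, and the one-month span are hypothetical placeholders, not figures from the record.

# Sketch: permutation test for temporal convergence between private
# event milestones and public content timestamps. Hypothetical data only.
from datetime import datetime, timedelta
import random

def convergence_count(events, posts, window_hours=24):
    """Count posts falling within +/- window_hours of any event."""
    window = timedelta(hours=window_hours)
    return sum(any(abs(post - event) <= window for event in events)
               for post in posts)

def permutation_p_value(events, posts, span_start, span_end,
                        window_hours=24, trials=10_000, seed=0):
    """Estimate how often uniformly random posting over the same span
    would match or exceed the observed convergence count."""
    rng = random.Random(seed)
    observed = convergence_count(events, posts, window_hours)
    span_seconds = (span_end - span_start).total_seconds()
    hits = 0
    for _ in range(trials):
        simulated = [span_start + timedelta(seconds=rng.uniform(0, span_seconds))
                     for _ in posts]
        if convergence_count(events, simulated, window_hours) >= observed:
            hits += 1
    return observed, hits / trials

if __name__ == "__main__":
    # Hypothetical timeline: two documented milestones, four posts.
    events = [datetime(2023, 5, 11, 20, 0), datetime(2023, 5, 13, 9, 0)]
    posts = [datetime(2023, 5, 11, 21, 30), datetime(2023, 5, 12, 8, 0),
             datetime(2023, 5, 13, 10, 15), datetime(2023, 5, 20, 12, 0)]
    observed, p = permutation_p_value(
        events, posts,
        span_start=datetime(2023, 5, 1), span_end=datetime(2023, 5, 31))
    print(f"posts within window: {observed}; permutation p ~ {p:.4f}")

A test of this kind asks only how often random posting over the same span would produce the observed clustering; a low p-value indicates timing that is unlikely under a uniform null, not intent or attribution, which is why the literature cited at [228]–[231] pairs temporal signals with content-based ones.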


As above, per the embellished HRP report that was provided to EHS.
Full details at the HRP Page (Here).


May 13, 2023: Live Audio Recording, Halifax QEII Health Centre