Crimes Against Humanity and War Crimes Act (S.C. 2000, c. 24) [Link]

Machine Learning Analysis - Part I
Synthesis by ChatGPT (OpenAI), Perplexity.ai, and Claude.ai (Anthropic), sourced from the evidence.
July 29th, 2025




All Three Machine Learning Models Reflected the Evidence in the Affidavits, and Correctly Identified the Issues at Stake.

A number of issues germane to the scandal have been identified through independent legal analysis conducted with artificial intelligence platforms, namely ChatGPT (OpenAI), Perplexity.ai, and Claude.ai (Anthropic). The analysis revealed procedural, adjudicative, and ethical anomalies that warrant institutional review. While AI platforms are shaped by their algorithms and training, they are not subject to political pressure, reputational bias, or career constraints. They assess inputs and identify inconsistencies based solely on that training and on logic.
While AI does not replace judicial authority, it can serve as an integrity-checking mechanism where systems appear compromised by bias, capture, or systemic failure. In this case, it has done exactly that. Part 1 features (1) the BC Billing Scandal; (2) Physical Health & Incarceration (contempt for opposing the billing scandal); (3) the Shareholder Scandal; and (4) the Unconstitutional Sealing Orders. Part 2, on a separate page, continues with (5) the PsyOp components and (6) general statements concerning the compromised and illegitimate civil proceedings.
ChatGPT (OpenAI) has over 180 million monthly users and projects $11 billion in annual revenue. It is already a standard tool in legal, academic, and government workflows—including research, case analysis, and policy synthesis.
Thomson Reuters and similar institutions recognize these models as legitimate components of modern legal practice. In one peer-reviewed study, ChatGPT achieved 100% accuracy in classifying legal evidence and determining criminal liability under complex statutory conditions across 268 separate cases. A few articles, drawn from a much larger library, are linked below.
- ChatGPT usage statistics (here)
- Thomson Reuters white paper on AI in legal practice (here)
- Peer-reviewed academic studies on legal AI reliability (here)
Are the AI Results Hearsay?
Put simply, the findings of the AI platforms align exactly with the record materials in the court file. This can easily be cross-checked by anyone with access to those records (and you can always ask...), or by comparing the findings against other pages on this website, such as the retainer fee billing scandal page (here) and the sealing order page (here), among other topics in the blog. In that sense, the machine learning models simply replicated the file. That three courts failed to do this beyond April 1st, 2022 (see the Civil page) underscores the gravity of the scandal.
Case law concerning hearsay is worth mentioning. The rules of evidence differ between the United States and Canada. In the US, machine-generated statements, such as AI outputs, survive hearsay challenges because the data is generated by a machine rather than by a person. As the following cases show;
United States v. Washington, 498 F.3d 225 (4th Cir. 2007);
"Contrary to the dissent's assertion, which makes no distinction between a chromatograph machine and a typewriter or telephone, the chromatograph machine's output is a mechanical response to the item analyzed and in no way is a communication of the operator. While a typewriter or telephone transmits the communicative assertion of the operator, the chromatograph machine transmits data it derives from the sample being analyzed, independent of what the operator would say about the sample, if he or she had anything to say about it. [...] The dissenting opinion universally mixes authentication issues with its argument about "statements" from the machine that the blood contains PCP and alcohol. Obviously, if the defendant wished to question the manner in which the technicians set up the machines, he would be entitled to subpoena into court and cross-examine the technicians. But once the machine was properly calibrated and the blood properly inserted, it was the machine, not the technicians, which concluded that the blood contained PCP and alcohol. The technicians never make that determination and accordingly could not be cross-examined on the veracity of that "statement.""
United States v. Channon, 881 F.3d 806 (10th Cir. 2018);
"Under Federal Rule of Evidence 801, hearsay is defined as an oral or written assertion by a declarant offered to prove the truth of the matter asserted. 'Declarant' means the person who made the statement." Fed. R. Evid. 801(b) (emphasis added). Here, the Excel spreadsheets contained machine-generated transaction records. The data was created at the point of sale, transferred to OfficeMax servers, and then passed to the third-party database maintained by SHC. In other words, these records were produced by machines. They therefore fall outside the purview of Rule 801, as the declarant is not a person. United States v. Hamilton, 413 F.3d 1138, 1142 (10th Cir. 2005)."
In Canada, the hearsay rule — which generally excludes out-of-court statements tendered for the truth of their contents — is governed by common law in criminal and civil contexts, subject to specific exceptions. The following case law provides guidance;
R. v. Starr, 2000 SCC 40 (CanLII), [2000] 2 SCR 144 at Paragraph 31;
"We have recognized these changes, commenting in R. v. Levogiannis, 1993 CanLII 47 (SCC), [1993] 4 S.C.R. 475, at p. 487, that “the recent trend in courts has been to remove barriers to the truth-seeking process”. This Court has taken a flexible approach to the rules of evidence, “reflect[ing] a keen sensibility to the need to receive evidence which has real probative force in the absence of overriding countervailing considerations”: R. v. Seaboyer, 1991 CanLII 76 (SCC), [1991] 2 S.C.R. 577, at p. 623. In the specific context of hearsay evidence, Lamer C.J. speaking for a unanimous Court in Smith, supra, at p. 932, explained that “[t]he movement towards a flexible approach was motivated by the realization that, as a general rule, reliable evidence ought not to be excluded simply because it cannot be tested by cross-examination.” Our motivation in reforming the rules of evidence has been “a genuine attempt to bring the relevant and probative evidence before the trier of fact in order to foster the search for truth”: Levogiannis, supra, at p. 487. These principles must guide all of our evidentiary reform endeavours."
R. v. Khelawon, [2006] 2 S.C.R. 787, 2006 SCC 57 at paragraph 49;
"The broader spectrum of interests encompassed in trial fairness is reflected in the twin principles of necessity and reliability. The criterion of necessity is founded on society’s interest in getting at the truth. Because it is not always possible to meet the optimal test of contemporaneous cross-examination, rather than simply losing the value of the evidence, it becomes necessary in the interests of justice to consider whether it should nonetheless be admitted in its hearsay form. The criterion of reliability is about ensuring the integrity of the trial process."
Again, in R. v. Starr at paragraph 215;
"Threshold reliability is concerned not with whether the statement is true or not; that is a question of ultimate reliability. Instead, it is concerned with whether or not the circumstances surrounding the statement itself provide circumstantial guarantees of trustworthiness. This could be because the declarant had no motive to lie (see Khan, supra; Smith, supra), or because there were safeguards in place such that a lie could be discovered (see Hawkins, supra; U. (F.J.), supra; B. (K.G.), supra)."
Measured against all of these cases, the machine-learning outputs used to corroborate numerous aspects of the scandal satisfy the above criteria under both US and Canadian law. Focusing on Canadian case law in particular, the AI outputs plainly do what the courts were unwilling to do beyond April 1st, 2022: declare the substance of the evidentiary record, thereby satisfying the public interest in getting at the truth.
With respect to reliability, an AI model cannot lie. Its programmers may decide to manipulate its algorithms to skew certain responses (in which case the onus would fall on the programmers), but even if there were tampering, the use case here is very simple, repeatable, and verifiable by cross-examination against the filed court record. As shown in the visuals below, the AI models were simply asked to restate the contents of the file and provide summaries in response to a series of questions. Any auditor can do the same.
With ChatGPT's projected $11 billion in annual recurring revenue, a threshold of reliability is satisfied in that its success is tied to its function. Likewise, pursuant to the tests in Khelawon and Starr, it is easy to cross-examine the evidence itself, or to run the same tests with the same materials, which should reasonably be expected to elicit the same results. A hearsay challenge utterly collapses under these conditions.
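For illustration only, the following is a minimal sketch of how an auditor could perform such a re-run. It is written in Python against the OpenAI client library; the file names, questions, and model name are hypothetical placeholders, not the actual court record or the prompts used for the analysis on this page.

# Illustrative sketch only: repeat the audit exercise described above.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. File names and questions are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# The filed materials the model is asked to summarize (placeholder paths).
RECORD_FILES = ["affidavit_1.txt", "billing_records.txt", "sealing_order.txt"]

# The same questions, posed verbatim on every run, so outputs can be compared.
QUESTIONS = [
    "Summarize the billing records contained in these materials.",
    "Identify any inconsistencies between the affidavits and the orders made.",
]

record_text = "\n\n".join(Path(f).read_text() for f in RECORD_FILES)

for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4o",  # any current model; the point is repeatability, not the model
        temperature=0,   # minimize variation so repeated runs are directly comparable
        messages=[
            {"role": "system",
             "content": "Answer strictly from the documents provided. Do not speculate."},
            {"role": "user",
             "content": f"Documents:\n{record_text}\n\nQuestion: {question}"},
        ],
    )
    print(question)
    print(response.choices[0].message.content)
    print("-" * 60)

Because the prompt is confined to the filed documents and the questions are fixed, any auditor running the same inputs should obtain substantially the same summaries, which is the repeatability relied upon above.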
Below, and in no particular order, I present the AI analysis concerning a series of topics germane to the scandal.
Retainer Fee Billing Scandal
Independently Corroborated by AI.


All Three AI Models Referenced the Billing Scandal and Other Aspects. It Requires a Cause.


Here, ChatGPT Proposes the Likelihood of a 4IR Component A Priori.


Health & Incarceration
These Emulations of a Doctor and Judge Concern Health ONLY.


Shareholder Records
All Three AI Models Identified the "Smoking Gun" Within Seconds.


Sealing Orders
All Three AI Models Declared the Sealing Orders Unconstitutional.


Claude.AI Named Actors in this Report (Redacted).


