Government flags with score: Guilty until proven innocent


By John W. Whitehead & Nisha Whitehead

You’ve been flagged as a threat.

Before long, every household in the United States will be similarly flagged and assigned a threat score.

Without ever knowingly having committed a crime, or having been convicted of one, you and your fellow citizens likely have been assessed for behaviors the government might consider devious, dangerous or concerning; assigned a threat score based on your associations, activities and viewpoints; and catalogued in a government database according to how you should be approached by police and other government agencies based on your particular threat level.

If you’re not unnerved over the ramifications of how such a program could be used and abused, keep reading.

It’s just a matter of time before you find yourself wrongly accused, investigated and confronted by police based on a data-driven algorithm or risk assessment culled by a computer program run by artificial intelligence.

Consider the case of Michael Williams, who spent almost a year in jail for a crime he didn’t commit. Williams was behind the wheel when a passing car fired at his vehicle, killing his 25-year-old passenger Safarian Herring, who had hitched a ride.

Despite the fact that there were no eyewitnesses to the shooting and no gun was found in the car, police charged the 65-year-old man with first-degree murder based on ShotSpotter, a gunshot detection program that had picked up a loud bang on its network of surveillance microphones and triangulated the noise to correspond with a noiseless security video showing Williams’ car driving through an intersection. The case was eventually dismissed for lack of evidence.

Although gunshot detection programs such as ShotSpotter are gaining popularity with law enforcement agencies, prosecutors and courts alike, they are riddled with flaws, mistaking dumpsters, trucks, motorcycles, helicopters, fireworks, construction, trash pickup and church bells for gunshots.

As an Associated Press investigation found, “the system can miss live gunfire right under its microphones, or misclassify the sounds of fireworks or cars backfiring as gunshots.”

In one community, ShotSpotter worked less than 50% of the time.

The same company that owns ShotSpotter owns a predictive policing program that aims to use gunshot detection data to predict crime before it happens. U.S. presidents Joe Biden and Donald Trump have both pushed for greater use of these predictive programs to combat gun violence in communities, despite findings that they have not reduced gun violence or increased community safety.

The rationale behind this fusion of widespread surveillance, behavior prediction technologies, data mining, precognitive technology, and neighborhood and family snitch programs is purportedly to enable the government to take preemptive steps to combat crime, or whatever the government has chosen to outlaw at any given time.

It is precrime, straight out of the realm of dystopian science fiction movies such as Minority Report, which aims to prevent crimes before they happen, but in fact it’s just another means of getting the citizenry in the government’s crosshairs in order to lock down the nation.

Even Social Services is getting in on the action, with computer algorithms attempting to predict which households might be guilty of child abuse and neglect.

All it takes is an AI tool flagging a household for potential neglect for a family to be investigated, found guilty, and the children placed in foster care.

Mind you, potential neglect can include everything from inadequate housing to poor hygiene.

According to an investigative report by the Associated Press, once incidents of potential neglect are reported to a child protection hotline, the reports are run through a screening process that pulls together personal data collected from birth, Medicaid, substance abuse, mental health, jail and probation records, among other government data sets. The algorithm then calculates the child’s potential risk and assigns a score of one to 20 to predict the risk that a child will be placed in foster care in the two years after they are investigated. The higher the number, the greater the risk. Social workers use their discretion to decide whether to investigate.

This technology is far from infallible. Yet, fallible or not, this AI predictive screening program is being used widely across the country by government agencies to surveil and target families for investigation.

The impact of these kinds of AI predictive tools is being felt in almost every area of life.

Under the pretext of helping overwhelmed government agencies work more efficiently, AI predictive and surveillance technologies are being used to classify, segregate, and flag the populace with little concern for privacy rights or due process.

All of this sorting, sifting and calculating is being done swiftly, secretly, and incessantly with the help of AI technology and a surveillance state that monitors your every move.

Where this becomes particularly dangerous is when the government takes preemptive steps to combat crime or abuse, or whatever the government has chosen to outlaw at any given time.

It’s the U.S. police state rolled up into one oppressive pre-crime and pre-thought crime package, and the end result is the death of due process.

With the advent of government-funded AI predictive policing programs that surveil and flag someone as a potential threat to be investigated and treated as dangerous, there can be no assurance of due process: you already have been turned into a suspect.

To disentangle yourself from the fallout of such a threat assessment, the burden of proof rests on you to prove your innocence.

You see the problem?

It used to be that every person had the right to be assumed innocent until proven guilty, and the burden of proof rested with one’s accusers. That assumption of innocence has since been turned on its head by a surveillance state that renders us all suspects and by overcriminalization, which renders us all potentially guilty of some wrongdoing or other.

Combine predictive AI technology with surveillance and overcriminalization, then add militarized police crashing through doors in the middle of the night to serve a routine warrant, and you’ll be lucky to escape with your life.

As I make clear in my book, “Battlefield America: The War on the American People,” and in its fictional counterpart, “The Erik Blair Diaries,” if you’re not scared yet, you should be.

—The Rutherford Institute
