California Governor Gavin Newsom has announced that 480 new cameras are being installed “along freeways and in Oakland to make the area safer.”

Describing the cameras as “high tech,” the Democrat said in a video posted on X that this will “help law enforcement identify vehicles linked to crimes using real-time information and alerts.”

What Newsom was talking about are Flock Safety’s cameras, which critics say are used to create AI-powered mass surveillance networks that keep a very close eye not only on the movements but also on the behavior of people caught in their “sights.”

Flock is in the business of automated license plate recognition (ALPR) tech, primarily sold to law enforcement, that can be installed even in areas with no electricity thanks to solar panels. The real-time component stems from Flock checking camera reads against databases of cars “marked” in advance, while the cameras’ cellular connection means police and federal agencies can be alerted to a vehicle’s location instantly.
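As a rough illustration of that real-time alerting model (a sketch only, not Flock’s actual implementation; the hotlist contents, camera IDs, and alert mechanism here are all hypothetical), the core matching step amounts to checking each plate read against a pre-loaded “hotlist”:

```python
# Hypothetical sketch of an ALPR "hotlist" alert loop -- not Flock's real code.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    plate: str          # normalized plate string from the camera's OCR
    camera_id: str      # which roadside camera saw it
    seen_at: datetime   # timestamp of the capture

# Plates "marked" in advance (stolen vehicles, warrants, etc.), loaded
# from an agency database before any matching happens.
HOTLIST: dict[str, str] = {
    "7ABC123": "stolen vehicle",
    "4XYZ987": "felony warrant",
}

def handle_read(read: PlateRead) -> None:
    """Compare one camera read against the hotlist and alert on a hit."""
    reason = HOTLIST.get(read.plate)
    if reason is not None:
        # In a deployed system this would go out over the camera's
        # cellular uplink to dispatch; here we just print it.
        print(f"ALERT: {read.plate} ({reason}) at {read.camera_id} "
              f"on {read.seen_at.isoformat()}")

handle_read(PlateRead("7ABC123", "cam-041", datetime.now(timezone.utc)))
```

The point is that the intelligence sits in the pre-marked database, not the camera: the camera just reads plates and phones home.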

In addition to raising the issue of mass surveillance as such, and how even nominally democratic governments can swiftly descend into authoritarianism (see: pandemic restrictions), a slide that such surveillance vastly aids, there is also the question of whether the claims about the effectiveness of systems like the one Governor Newsom is touting actually hold up.

The bigger political picture is that crime rates have been soaring in California and that those in power there are under obvious pressure to do something about it; but, observers note, they would also like to act, or at least create the illusion of acting, without earmarking more money for law enforcement.

And here turning to “AI cameras” comes in handy, since their proliferation is for the most part financed by the Infrastructure Investment and Jobs Act, signed in 2021, whose relevant funds amount to $15.6 billion.

The Biden White House decided that up to 10 percent of this, roughly $1.56 billion, could be used by states to buy automated traffic enforcement tools such as cameras.

Flock has sought to assuage privacy and, ultimately, safety concerns among law-abiding citizens by saying that the tech behind the cameras does not use facial recognition, nor is it activated by “simple traffic violations.”

But, as critics point out, built-in facial recognition may not be necessary once this “infrastructure” is in place – law enforcement has its own facial recognition tools that can be fed the data harvested by the cameras.

  • @brygphilomena@lemmy.world
    1 month ago

    What pisses me off about this is that it does nothing to prevent crime. Your car will still be broken into and your shit stolen; only now they can track the car.

    But who gets to decide what is tracked and how it is tracked? Who gets that information, and who is overseeing it to prevent or punish abuses? What future crimes will this be used to track citizens for? Will people be tracked but not arrested/prosecuted? Does the tracking expire after a period of time, or does someone need to turn it off?

    One right that isn’t enumerated in law, but should be, is the right to be lost. We should be able to disappear from society and go off grid if we decide. Whether because we did “crime” or just because we want to.

    • @nul@programming.dev
      1 month ago

      Agreed, it is super creepy to have the government know where you are at all times, even if the government weren’t corrupt. Most of us are tracked anyway because our phones are not private, but this is just another step down that slippery slope.

      Another aspect of this is how inherently racist AI surveillance systems are. Facial recognition is trained mostly on photos of white people, so it’s great at telling white people apart. But due to a combination of lacking training data and difficulty picking up contrast with darker skin tones, people of color are far more likely to be falsely identified by AI software.

      I used to work for a company that did facial recognition for various clients including the government. My friends and I all quit in protest when our employer started work on something they called a “deconfliction queue”. The idea was that faces from crime scenes which could not be identified would enter a queue where they would be saved for future comparison. Then, if the same face shows up as part of another crime scene, they would be matched and raised in priority (see the sketch at the end of this thread).

      This is all well and good if it actually is the same person committing these crimes. In practice, we suspected that having a queue like this would lead to false positives where people (particularly of color) would be accused of committing multiple crimes just because they look similar to someone else. Or in the worst case scenario, an innocent person could register for a photo ID and suddenly get a no-knock warrant in the middle of the night because the deconfliction queue thinks their driver’s license looks a lot like a serial criminal.

      Real Minority Report stuff we’re heading into, this shit’s probably going to get a lot more fucked before it gets better.
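For readers curious how a “deconfliction queue” like the one described in the comment above might work, here is a minimal sketch, assuming faces are compared as embedding vectors against a cosine-similarity threshold. Every name, the vectors, and the 0.92 cutoff are hypothetical illustrations, not the company’s actual system; the false-positive risk the commenter describes lives entirely in that threshold.

```python
# Hypothetical sketch of a "deconfliction queue" -- unidentified face
# embeddings are stored and compared against new crime-scene sightings.
import math

SIMILARITY_THRESHOLD = 0.92  # arbitrary; where false positives creep in

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

class DeconflictionQueue:
    def __init__(self) -> None:
        # Each entry: (stored embedding, list of crime-scene IDs it matched)
        self.entries: list[tuple[list[float], list[str]]] = []

    def submit(self, embedding: list[float], scene_id: str) -> int:
        """Add an unidentified face; return how many scenes it now links.

        If a stored face is "similar enough", the new sighting is merged
        into it -- which is exactly where two different people who merely
        look alike can be fused into a single suspect record.
        """
        for stored, scenes in self.entries:
            if cosine_similarity(embedding, stored) >= SIMILARITY_THRESHOLD:
                scenes.append(scene_id)
                return len(scenes)  # priority rises with each match
        self.entries.append((embedding, [scene_id]))
        return 1

q = DeconflictionQueue()
q.submit([0.1, 0.9, 0.3], "scene-001")
print(q.submit([0.11, 0.89, 0.31], "scene-002"))  # near-identical -> merges, prints 2
```

Note that two different people whose embeddings happen to clear the threshold are silently merged into one record; that is the look-alike failure mode the commenter worried about.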