California Governor Gavin Newsom has announced that 480 new cameras are being installed “along freeways and in Oakland to make the area safer.”

Describing the cameras as “high tech,” the Democrat said in a video posted on X that this will “help law enforcement identify vehicles linked to crimes using real-time information and alerts.”

Newsom was talking about Flock Safety’s cameras, which critics say are used to build AI-powered mass surveillance networks that track not only the movements but also the behavior of the people caught in their “sights.”

Flock is in the business of automated license plate recognition (ALPR) tech, sold primarily to law enforcement, with cameras that can be installed even in areas without electricity thanks to solar panels. The real-time component comes from Flock checking plates against databases of vehicles flagged in advance (so-called hot lists), while the cameras’ cellular connectivity means police and federal agencies can be alerted to a vehicle’s location almost instantly.
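To make the mechanics concrete, here is a minimal sketch of how such a hot-list alert pipeline might work. The names, data structures, and hot-list entries below are illustrative assumptions, not Flock’s actual API or internal design:

```python
# Hypothetical sketch of an ALPR hot-list alert pipeline (not Flock's code).
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PlateRead:
    plate: str            # normalized plate string produced by the OCR stage
    camera_id: str        # which roadside camera produced the read
    timestamp: datetime   # when the vehicle passed the camera


# Vehicles flagged in advance (stolen cars, plates tied to open cases).
# In a real deployment this would sync from law enforcement databases.
HOT_LIST: dict[str, str] = {
    "8ABC123": "reported stolen",
    "7XYZ789": "linked to open robbery case",
}


def send_alert(read: PlateRead, reason: str) -> None:
    """Stand-in for the push notification sent to subscribing agencies."""
    print(f"ALERT: {read.plate} ({reason}) seen by {read.camera_id} "
          f"at {read.timestamp.isoformat()}")


def process_read(read: PlateRead) -> None:
    """Check one plate read against the hot list; alert on a match."""
    reason = HOT_LIST.get(read.plate)
    if reason is not None:
        # The cameras' cellular uplink is what makes this near real time.
        send_alert(read, reason)


process_read(PlateRead("8ABC123", "cam-oakland-042", datetime.now(timezone.utc)))
```

The critics’ concern maps directly onto this design: if every read is retained, rather than just the matches, the network amounts to a searchable record of everyone’s movements.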

Beyond the issue of mass surveillance as such, and the way such surveillance helps even nominally democratic governments swiftly descend into authoritarianism (see: pandemic restrictions), there is also the question of whether systems like the one Governor Newsom is touting are as effective as claimed.

The bigger political picture: crime rates have been soaring in California, and those in power there are obviously under pressure to do something about it. But, observers note, they would also like to act, or at least create the illusion of acting, without earmarking more money for law enforcement.

And here turning to “AI cameras” comes in handy, since their proliferation is for the most part financed by the 2021 Infrastructure Investment and Jobs Act, which includes $15.6 billion in highway safety funds.

The Biden White House decided that up to 10 percent of this money, roughly $1.56 billion, can be used by states to buy automated traffic enforcement tools such as cameras.

Flock has sought to assuage privacy and, ultimately, safety concerns among law-abiding citizens by saying that the tech behind the cameras does not use facial recognition and is not triggered by “simple traffic violations.”

But, as critics point out, that may not be necessary once this “infrastructure” is in place – since law enforcement has its own facial recognition tools that can use the data harvested by the cameras.

  • @nul@programming.dev · 20 days ago

    This is a really tricky one. I can understand why people in the government would think that this is a good choice, for reasons of “safety.” I remember reading about the In-N-Out in Oakland that shut down recently because of rampant car break-ins in its parking lot. If we could detect that a crime is happening and then alert the nearby patrol to the perpetrators’ whereabouts and heading, that would make those areas safer and less crime-prone.

    But the benefit is only worth the cost if the benefit is actually real. It might stop a car break-in in select locations, but the need for resources is still there. While it’s tempting to lose myself in the anger of a person who is being robbed, I can also see that so many people are committing robbery at the Oakland In-N-Out because so many in that area feel disaffected from society. That they were failed, so they’ve crossed the line of trampling the human rights of another for their own gain. Not saying anything a thief does is justified, just that rampant thievery is only a symptom of a deeper problem.

    The world is heating up. We will have to adapt to growing crops in climate-controlled environments as conditions get worse. Homes will continue to be expensive and coveted. Water supply will become ever more critical. All of this entails resource consumption, and none of these things (food, housing, water) are guaranteed as human rights in the US, or in many other places in the world.

    While some nations are making strides on human rights, many places are far worse off than the US, and the global south will surely produce many more refugees as the equator heats up further.

    There are real concerns about how many people are crossing the southern US border each year. But also, everyone in a democratic nation should be granted their basic human rights by definition of what it means to be a democracy.

    We should be protecting people who dine out from having their property stolen while they are eating. But also, enabling the government to track us and all of our actions using AI recognition software may not be a great idea for salvaging a “democracy” currently in the pocket of corporations.

    We should be granting a wide variety of human rights to every person across the globe and are prevented from doing so due to a cabal of ultrarich oligarchs. But also, if we took down those oligarchs and spread their wealth evenly across all people like butter over toast… it might not go as far as you think.

    These are not dichotomies but important facets of multifaceted problems. Centrism is not a good approach, because treating every problem as equally severe and every side as equally incorrect leaves us at a standstill, with the status quo at an advantage. Picking no side is equivalent to picking the side that’s in power. But to change who represents each side, we have to put in the time and resources it takes to get involved.

    But who has that kind of time when we’re all struggling to keep hold of our human rights? If income inequality were abolished and all the wealth of the upper class were divided evenly among the world, it might be upsetting to get only a few thousand dollars and be expected to keep working. Income is so unequal because fortunes are built by taking small amounts from millions of workers. We should absolutely demand better pay for all workers, but we should also evaluate what efficiency gains capitalism purposely prevents us from making.

    Even if we took control of all the corporations, we would still have to solve the problem of how to run the companies collectively, making decent, informed decisions that lead to good results for everyone. Capitalism bought us a cheapened world where every corner is cut as long as the result is still marketable. Where could humanism take us instead? How do we fairly get local gardens into every community, at a scale where everyone can be fed without the need for mass transport? How do we create a platform that connects us all in such a way that we the people can make that kind of wide-sweeping decision, rather than requiring a representative government to do it for us?

    These problems aren’t easy to solve, and we will need to do it together. Those of us who have enough resources and spare time to start the process need to figure out what a digital forum would look like as a public utility. How feasible is a society where all operating costs and profits are shared among everyone? Where productivity can be measured, and those who did the most to make something happen can be fairly rewarded for their effort, without dismantling the overall basic income that provides for our human rights? And if we end up with a surveillance system for the safety of the people, we should build it in a way that’s open source and alerts the community as well as law enforcement when a person is in need of help.

    Fairness is intangible. Hard to define. Hard to maintain. Often argued about. There’s no agreed-upon standard of what’s fair. And to make a system that determines fairness fairly, you have to train an AI to take varied opinions and select for the most voted-upon answers. We have the potential to make a true democracy where every voice is heard. But is that the ultimate goal?

    If we can make it to a world where everyone is fed, they all feel safe, and no one lacks for education or socialization, our democratic society could be beautiful and productive. But if we give the current world this same tool of democracy and propose that we work together to build that ideal world… would those of us who grew up in this world be ready to guide us into that future? Or would our societal flaws just get baked in and become impossible to remove? I try to have faith that there are enough of us who would do the right thing. But that’s just faith. And relying on faith makes me scared. That said, if there’s a time to act, it’s probably now.

    • @brygphilomena@lemmy.world · 19 days ago

      What pisses me off about this is that it does nothing to prevent crime. Your car will still be broken into and your shit stolen; only now they can track the car.

      But who gets to decide what is tracked and how it is tracked? Who gets that information, and who is overseeing it to prevent or punish abuses? What future crimes will this be used to track citizens for? Will people be tracked but not arrested or prosecuted? Does the tracking expire after a period of time, or does someone need to turn it off?

      One right that isn’t enumerated in law, but should be, is the right to be lost. We should be able to disappear from society and go off grid if we decide. Whether because we did “crime” or just because we want to.

      • @nul@programming.dev · 19 days ago

        Agreed, it is super creepy to have the government know where you are at all times, even if the government weren’t corrupt. Most of us are tracked anyway because our phones are not private, but this is just another step down that slippery slope.

        Another aspect of this is how inherently racist AI surveillance systems are. Facial recognition is trained mostly on photos of white people, so it’s great at telling white people apart. But due to a combination of lacking training data and the difficulty of picking up contrast with darker skin tones, people of color are far more likely to be falsely identified by AI software.

        I used to work for a company that did facial recognition for various clients including the government. My friends and I all quit in protest when our employer started work on something they called a “deconfliction queue”. The idea was that faces from crime scenes which could not be identified would enter a queue where they would be saved for future comparison. Then, if the same face shows up as part of another crime scene, they would be matched and raised in priority. This is all well and good if it actually is the same person committing these crimes. In practice, we suspected that having a queue like this would lead to false positives where people (particularly of color) would be accused of committing multiple crimes just because they look similar to someone else. Or in the worst case scenario, an innocent person could register for a photo ID and suddenly get a no-knock warrant in the middle of the night because the deconfliction queue thinks their driver’s license looks a lot like a serial criminal.
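        To make the failure mode concrete, here is a minimal sketch of a queue like that, assuming it stores face embeddings and links crime scenes by cosine similarity. The class name, threshold, and embedding handling are hypothetical, not the real system described above:

        ```python
        # Hypothetical sketch of a "deconfliction queue" (illustrative only).
        import numpy as np

        MATCH_THRESHOLD = 0.8  # similarity above which two faces are "the same"


        class DeconflictionQueue:
            """Holds unidentified face embeddings from past crime scenes and
            flags new faces that resemble a stored one."""

            def __init__(self) -> None:
                self.entries: list[tuple[str, np.ndarray]] = []  # (scene_id, embedding)

            def check_and_add(self, scene_id: str, emb: np.ndarray) -> list[str]:
                """Return past scene IDs whose face resembles this one,
                then store this face for future comparisons."""
                matches = []
                for past_scene, past_emb in self.entries:
                    sim = float(np.dot(emb, past_emb) /
                                (np.linalg.norm(emb) * np.linalg.norm(past_emb)))
                    # The false-positive risk lives here: two *different* people
                    # whose embeddings clear the threshold get linked as one
                    # serial offender and raised in priority.
                    if sim >= MATCH_THRESHOLD:
                        matches.append(past_scene)
                self.entries.append((scene_id, emb))
                return matches


        queue = DeconflictionQueue()
        rng = np.random.default_rng(0)
        queue.check_and_add("scene-001", rng.standard_normal(128))
        print(queue.check_and_add("scene-002", rng.standard_normal(128)))  # likely []
        ```

        Nothing in a structure like this can distinguish “same person” from “similar-looking people,” and since recognition error rates are higher for darker skin tones, the people most likely to be wrongly linked are people of color.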

        Real Minority Report stuff we’re heading into; this shit’s probably going to get a lot more fucked before it gets better.