AI surveillance is being used to spot turnstile jumpers and track fare evasion, but experts say it could have harmful effects.
Experts say that states and private companies can use monitoring technology based on artificial intelligence (AI) to stop crime, but it comes at a price.
For example, New York City’s Metropolitan Transportation Authority recently revealed that it has installed AI monitoring tools in seven subway stations, but not to catch turnstile jumpers. Instead, the goal is to determine how much money the agency loses because riders don’t pay their fares.
“The MTA uses this tool to figure out how many people don’t pay their fares without naming them,” said Joana Flores, a spokeswoman for the MTA, in a statement to Fox News Digital.
In May, the public transportation authority released a study estimating that fare evasion cost it about $690 million last year. The AI fare-evasion tracking software will help the MTA measure how much money it is losing and help the city’s government devise ways to stop fare evaders by tracking how they get through without paying.
The study says that where people are jumping turnstiles, replacing the turnstiles with doors may help cut fare evasion.
The study says that “AI technology could help reduce fare evasion” and that “measurements of fare evasion can help plan effective and fair interventions.”
According to government documents reported by NBC, the MTA has partnered with AWAAIT, an AI software company based in Spain. According to its website, AWAAIT makes a tool called DETECTOR that spots fare evaders in real time. Local news reports say AWAAIT’s technology is in use in three cities, including New York and Barcelona.
Even though the MTA said it doesn’t use AI to find people who don’t pay their fares, AWAAIT’s website says its “system alerts in real-time, sending screenshots of the fare violation to the app on the smartphones of ticket inspectors.”
Asked about the third city and about privacy concerns surrounding AI monitoring tools, AWAAIT co-founder and CEO Xavier Arrufat did not respond.
David Ly, the founder of Iveda, an AI video platform and “smart city” technology company, told Fox News Digital that AI would make the cameras already in train stations more capable.
“It could be used nearly everywhere. On construction sites, we make sure that hard hats are worn correctly. When people walk on the job site, we make sure they wear safety vests,” he said. “So technology can be taught to do almost anything, whether to keep people safe, honest, etc.”
But when AI is misused, it can erode people’s trust in the technology.
Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (STOP), told the media that he thinks it is “alarming” that the MTA would hire a “foreign company to track riders without their consent.”
“There are a lot of big questions about how they get it, how they use it and what the long-term benefit is,” Cahn said. “We don’t have to pay for AI to know that people are jumping turnstiles. And … the MTA’s answer has never made it clear what the point is.”
If misused, AI can find, flag and fine people for even the most minor offenses.
“If you can use it to avoid paying a fare, you can use it to walk on the sidewalk. You can use it in almost every other part of your life,” Cahn said. “When they wrote the Fourth Amendment, the framers of the Constitution did not have this in mind.”
Ly, on the other hand, says AI monitoring can be a highly effective way to fight and prevent crime, but it goes only as far as the people who oversee and operate the technology allow. In other words, a camera might spot a crime suspect, but that suspect won’t be caught until a human law enforcement officer acts.
“Most of the time, we give technology more credit because it can do many other things. It can, but we don’t have enough time, energy or resources to do that,” Ly said. “… It doesn’t matter.”
He also noted that cameras are already in schools, gas stations, Walmart parking lots and other places. But without “intelligence,” he said, cameras are just “paperweights” recording video that may not be useful until a person, or an AI, notices something out of the ordinary, like someone carrying a gun or a car crash.
Like people, AI can learn what normal human behavior looks like and compare it with behavior that isn’t normal. That is how it can spot potential danger.
He also said that people tend to grow anxious about how AI monitoring technology could affect their privacy or alert police to minor offenses.
“When people have a lot of free time, they start to think too much. We think too much about things, get anxious and start to think about movies. … We’re all like, ‘Oh my God.’ But if the police don’t do anything about it, it doesn’t matter,” Ly said.
Cahn said that the MTA spends more money trying to stop fare evasion than fare evaders cost the city each year. He also noted that New York City is already one of the most surveilled places in the world: with about 42,000 security cameras, it is closer to Shanghai than to Stockholm. The privacy advocate called support for surveillance as a crime-fighting tool “Orwellian.”
“I won’t accept that the price of safety is a place where no one can walk out the front door without being watched. That doesn’t sound like a democracy to me,” he said.