New trials are underway for an EU-funded artificial intelligence lie-detector scheme that aims to scan travelers whom governments flag as suspicious.
Starting November 1, the iBorderCtrl system will be launched at four border crossing points in Hungary, Latvia, and Greece, on borders with countries outside the EU.
The new technology aims to make border crossings quicker for travelers while weeding out criminals and illegal crossings.
According to RT: Developed with €5 million in EU funding from partners across Europe, the pilot project will be operated by border agents in each of the trial countries and led by the Hungarian National Police.
Those using the system will first have to upload documents such as their passport and complete an online application form before being assessed by the virtual, retina-scanning border agent.
The traveler will merely stare into a camera and answer the questions one would expect a diligent human border agent to ask, according to New Scientist.
“What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?”
But unlike a human border guard, the AI system analyzes minute micro-gestures in the traveler’s facial expressions, searching for any signs that they might be lying.
If satisfied with the traveler’s honest intentions, iBorderCtrl will reward them with a QR code that allows them safe passage into the EU.
If it is not satisfied, travelers will have to go through additional biometric screening, such as fingerprinting, facial matching, or palm-vein reading. A human agent then makes a final assessment.
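As a rough illustration of the flow described above, here is a minimal Python sketch of the routing logic. The Traveler fields, the assess function, and the 0.5 risk threshold are all hypothetical assumptions for this sketch; iBorderCtrl’s actual interfaces and decision thresholds have not been published.

```python
# Hypothetical sketch of the iBorderCtrl screening flow described above.
# All names and the threshold are assumptions, not the real implementation.
from dataclasses import dataclass

RISK_THRESHOLD = 0.5  # assumed cut-off; no real threshold has been published


@dataclass
class Traveler:
    documents_verified: bool  # passport and online application checked
    deception_score: float    # 0.0 (credible) to 1.0 (deceptive)


def assess(traveler: Traveler) -> str:
    """Route a traveler based on the virtual pre-screening interview."""
    if not traveler.documents_verified:
        return "reject: incomplete pre-registration"
    if traveler.deception_score < RISK_THRESHOLD:
        return "issue QR code: proceed to standard crossing"
    # Flagged travelers undergo extra biometric checks (fingerprinting,
    # facial matching, palm-vein reading); a human agent makes the final call.
    return "secondary screening: biometrics plus human review"


print(assess(Traveler(documents_verified=True, deception_score=0.2)))
print(assess(Traveler(documents_verified=True, deception_score=0.8)))
```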
Like all AI technologies in their infancy, the system is still highly experimental, and with a current success rate of 76 percent, it won’t be preventing anyone from crossing the border during its six-month trial.
But developers of the system are “quite confident” that accuracy can be boosted to 85 percent with fresh data.
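To see why a 76 percent figure is less reassuring than it sounds, consider a back-of-envelope calculation. The traveler volume and the 1 percent deception base rate below are assumptions for illustration; only the 76 percent figure comes from the reporting, and it is further assumed here to apply equally to honest and deceptive travelers:

```python
# Illustrative base-rate calculation; volume and liar rate are assumed.
travelers = 10_000   # assumed screening volume, for illustration only
liar_rate = 0.01     # assumed: 1 in 100 travelers is deceptive
accuracy = 0.76      # trial figure cited above, applied to both classes

liars = travelers * liar_rate            # 100 deceptive travelers
honest = travelers - liars               # 9,900 honest travelers

caught = liars * accuracy                # 76 correctly flagged
false_flags = honest * (1 - accuracy)    # 2,376 honest travelers flagged

flagged = caught + false_flags
print(f"flagged: {flagged:,.0f}")                           # 2,452
print(f"share actually deceptive: {caught / flagged:.1%}")  # ~3.1%
```

Under those assumptions, roughly 97 percent of flagged travelers would be honest people sent to secondary screening.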
However, the greater concern comes from civil liberties groups who have previously warned about the gross inaccuracies found in systems based on machine learning, especially ones that use facial recognition software.
In July, the head of London’s Metropolitan Police stood by trials of automated facial recognition (AFR) technology in parts of the city, despite reports that the AFR system had a 98 percent false positive rate, resulting in only two accurate matches.
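A 98 percent false positive rate sounds abstract, but combined with the two accurate matches it pins down the rough scale of the trial’s alerts. A quick calculation, assuming the reported figures are round numbers:

```python
# Inferring the alert totals implied by the two reported figures.
accurate_matches = 2          # figure reported above
false_positive_rate = 0.98    # figure reported above

total_alerts = accurate_matches / (1 - false_positive_rate)  # 100 alerts
false_alerts = total_alerts - accurate_matches               # 98 false

print(f"total alerts: {total_alerts:.0f}, false alerts: {false_alerts:.0f}")
```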
The system had been labeled an “Orwellian surveillance tool” by the civil liberties group Big Brother Watch.