According to Reuters, several major US banks (JPMorgan Chase, City National Bank of Florida, Wells Fargo) are either trying out or have tried out artificial intelligence systems that include facial recognition. The news agency reported the story on April 19th.
According to the agency, City National will start facial recognition trials to identify both its customers at ATMs and its employees at branches. JP Morgan is going to run small-scale trials of its own in the state of Ohio, involving “video analytics technology”. And they are not alone.
Wells Fargo is matching footage of crimes to known offenders, according to a former employee of the software company 3VR. The bank is using about a decade’s worth of video to build its database, though the institution itself has not commented on its fraud prevention program so far.
Last but not least, Bank of America representatives have met several times with AnyVision, an AI company whose technology is already in use at BP and at a hospital in LA. The bank wants to reduce loitering at ATMs, a problem it has been working on for about a decade.
FTC guidelines are laid down at last.
Artificial intelligence has caught the public’s imagination over the last few years, especially after the late Stephen Hawking railed against it publicly. A lot of fear and misinformation has been floating around ever since. That is one of the reasons regulatory agencies on both sides of the Atlantic have been giving the subject serious thought and are proposing much-needed guidelines and regulations for the ethical use of artificial intelligence in practical technologies.
The FTC’s guidelines are now available on its website. The agency encourages companies to strive for “truth, fairness, and equity” in the way they apply AI to solve their problems. Among other tips, it recommends compiling diverse data sets, making the technology transparent enough to allow for outside scrutiny, and avoiding sweeping claims about fairness.
One of the agency’s key instructions, however, is a tiny bit worrisome. A whole paragraph is devoted to advising companies to “do more good than harm”, which is not exactly the kind of standard that brings peace of mind to the general public.
The European Union has been paying much closer attention to privacy in the digital world, and facial recognition is no exception. According to the Financial Times, EU regulators are pushing forward a set of strict new rules on facial recognition, restricting its use to “a small number of public-interest scenarios.” Practices such as real-time tracking will require judicial approval from the state.
Preventing the perpetuation of biases against minority groups is a priority in the EU regulations: they include a hefty fine (6% of a company’s global turnover) if facial recognition technology is used to perpetuate such biases.
In the UK there is already a legal precedent against facial recognition. In August of last year, a court ruled that the automated facial recognition in use by the South Wales Police was unlawful. The case arose after a man was identified in Cardiff in 2017 and 2018, during a peaceful protest, without his consent. The prevailing view was that his human rights were at stake.
The market moves forward, nevertheless.
So there is plenty of institutional pushback against facial recognition technology. Even so, the market keeps growing and is expected to double in size by 2025; some experts estimate it could be worth as much as nine billion USD by then.
There is no way to know how the technology will look by then. It will surely be more developed, more complex, and more effective; it could even become reasonably cheap.
Artificial intelligence experts (even those who favor its use) keep warning us that these technologies are moving much faster than most of us realize, so it is too idealistic to expect the relevant institutions to keep up with the trends and keep us safe. That is why these new regulations, even if they remain seminal attempts so far, are so important.
(SecurityAffairs – hacking, PLA Unit 61419)