ON THE DANGERS OF AUTOCOMPLETE IN TRAINING AND ONBOARDING SOLUTIONS

On March 18, 2018, dreams of “a brave new world” in the autonomous vehicle industry came to a screeching halt. On that day, 49-year-old Elaine Herzberg had the misfortune of becoming the first recorded pedestrian fatality in an accident involving an autonomous vehicle. One of Uber’s experimental cars, a Volvo XC90, sensed the pedestrian crossing the street but decided against taking evasive action.

Merely a few days later, on March 23, another autonomous-car-related death occurred when 38-year-old Wei Huang’s Tesla Model X SUV slammed into a concrete lane divider on a highway and caught fire. The car was simply unable to read the road correctly; it flashed warnings, and when the driver failed to react and take the wheel, it rolled straight into the concrete barrier.

Both tragic incidents are troubling enough in and of themselves, but they also bring to light some difficult issues.

One of the main questions is: who is responsible?

Who should be held accountable when autonomous or partially autonomous systems fail? In both cases, the developers behind the technology rightly observed that their systems were never meant to replace human senses and judgement (nor did they ever claim otherwise). In short, the responsibility for the technology’s fatal consequences was shifted onto the users. On the other hand, it is quite understandable why such answers are deemed insufficient as far as the companies’ responsibility is concerned. After all, wasn’t the entire purpose of the technology to make driving safer and easier for drivers? It is hard to predict the extent to which these incidents have affected the reception of the technology and its future.

While autocomplete does not endanger lives (yet), Proverbs 18:21 does mention that “Death and life are in the power of the tongue”, so let’s look at what impact “autonomous writing” may have.

THE INITIAL IMPLICATIONS OF AUTOCOMPLETE

To be sure, autocomplete and other AI-related features do not normally determine life-and-death situations. Even so, much like the autonomous vehicle market, they give rise to some very important questions and moral dilemmas. The first thing to take into consideration is that in order to automatically “fill” content fields, such systems have to collect and store the data they are fed and make it available for later use. The problem is less dramatic when search terms are involved, and far greater when the private data of individual users is concerned.
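
To make this concrete, here is a minimal sketch of how a naive autocomplete store might work; the class name, storage key and values below are hypothetical, not any particular vendor’s API. The point to notice is that every value a user submits, private or not, gets persisted so it can be suggested later:

    // Hypothetical sketch of a naive autocomplete store (TypeScript).
    // Everything typed into a tracked field is kept in plain text.
    class AutocompleteStore {
      private entries: Set<string>;

      constructor(private storageKey: string) {
        // Reload everything users have previously typed from browser storage.
        const saved = localStorage.getItem(storageKey);
        this.entries = new Set<string>(saved ? JSON.parse(saved) : []);
      }

      // Called when a field is submitted: the raw value is stored as-is.
      record(value: string): void {
        this.entries.add(value);
        localStorage.setItem(this.storageKey, JSON.stringify([...this.entries]));
      }

      // Called on each keystroke: offer completions from the stored history.
      suggest(prefix: string): string[] {
        return [...this.entries].filter((entry) =>
          entry.toLowerCase().startsWith(prefix.toLowerCase())
        );
      }
    }

    // A national ID typed once now sits in plain text on the user's machine.
    const store = new AutocompleteStore("onboarding-form-history");
    store.record("123-45-6789");
    console.log(store.suggest("123")); // ["123-45-6789"]

Whether that history lives in the browser, on the organization’s servers or with a third-party vendor, the same record exists somewhere, and someone has to secure it.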

Where do you store the data, then? Do you leave it to the developers of the feature and allow the information to migrate away from the organization? Do you store it within the organization’s own product or website? Where would it be safest? And just as important: who is legally responsible if and when something goes wrong? Is the risk worth the advantages of automation?
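
One way to frame the storage question is as an architectural choice. In the hypothetical sketch below (neither endpoint refers to a real product), the code for the two options is almost identical, but the legal and security consequences are very different:

    // Hypothetical sketch: the same save operation, two very different homes.
    interface SuggestionBackend {
      save(field: string, value: string): Promise<void>;
    }

    // Option 1: the feature vendor hosts the data, so it leaves the organization.
    class VendorHostedBackend implements SuggestionBackend {
      async save(field: string, value: string): Promise<void> {
        await fetch("https://vendor.example.com/collect", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ field, value }),
        });
      }
    }

    // Option 2: the organization keeps the data on its own infrastructure
    // and carries the full responsibility for protecting it.
    class InHouseBackend implements SuggestionBackend {
      async save(field: string, value: string): Promise<void> {
        await fetch("/internal/autocomplete/collect", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ field, value }),
        });
      }
    }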

WHAT COULD POSSIBLY GO WRONG?

How big are the risks involved in collecting personal data? Big enough to prompt various authorities to draw up data-protection standards such as HIPAA, SOX and GDPR. Big enough that when GDPR took effect, the entire SaaS industry had to evolve to meet the new standard and went into a worldwide frenzy to avoid the potential legal repercussions. But data security is just one of many aspects. Here are a few additional factors to take into consideration:

  • Programs need periodic updates in order to maintain their usability, reliability and security.
  • Reliability issues result in user frustration, possible loss of customers and even damage to an organization’s reputation.
  • No system is completely fail-safe. What if the system loses important data?
  • Who is responsible when customers, whether accidentally or maliciously, abuse the system?

If you’re still not clear about this issue, ask yourself this (the standard safeguard is sketched after the list): if your bank portal used autocomplete for money-transfer activities, would you:

  • Feel safe about where your data is held?
  • Trust the transaction to be infallible?
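
Probably not. This, incidentally, is why banks typically opt sensitive fields out of browser autofill altogether. Here is a minimal sketch (the field selectors are hypothetical) using the standard HTML autocomplete attribute:

    // Hypothetical sketch: opt sensitive fields out of browser autofill.
    const sensitiveSelectors = ['input[name="iban"]', 'input[name="amount"]'];

    for (const selector of sensitiveSelectors) {
      const field = document.querySelector<HTMLInputElement>(selector);
      if (field !== null) {
        // "off" asks the browser not to store or suggest previous values.
        // Browsers treat this as a hint, not a guarantee.
        field.setAttribute("autocomplete", "off");
      }
    }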

Getting into an autonomous car, if you knew that you (or others) might not make it out safe and sound, would you still want to be taken for a ride?