The Morality of Background Check Services
The morality of background checks is a perennial topic in the press and on social media. Prying into people’s lives is troubling enough that we would not be surprised if background investigations were soon debated by moral philosophers alongside such problems as racial discrimination, criminal punishment, and war.
Philosophers’ interest in background checking would seem all the more pertinent in light of recent developments in predictive technology. Background investigations have become so sophisticated that our reality already resembles the worst totalitarian societies ever imagined in literature. And because the artificial intelligence deployed in background investigations continues to advance, we can safely say that surveillance in our world will soon outstrip the gloomiest pictures writers have ever painted.
Dissenting voices will say that background checks save us from interacting with criminals, liars, con artists, and people with antisocial tendencies. The media are full of stories about wolves in sheep’s clothing. We constantly hear of individuals who ingratiate themselves with others, hiding a criminal past under a guise of kindness, and then unleash their aggression on unsuspecting victims. Such duplicitous people wreak havoc in the workplace, on the street, and in our private lives. What can minimize our exposure to them is precisely an investigation into their pasts. To protect yourself from the wrong company, all you need to do is go to the best background check service you can find online and submit the details of the person you want to know more about.
If background investigations truly help us expose people’s immoral and criminal pasts and thereby shield us from disaster, why do so many people oppose the practice? Companies that offer background check services are convinced that they are performing the noble mission of saving people from harm. Why, then, question the morality of their actions and compare them to a surveillance state?
The problem is that it is difficult to know where to draw the line when digging into people’s private lives. If background check companies stopped at determining whether a person of interest had committed a crime, the procedure would not be so ethically dubious. But they also pull in financial data, from bankruptcy notices to alerts about large purchases, and report on weapons permits. Critics of background investigation practices rightly ask whether our employer, neighbor, or potential suitor should know how much we owe the bank or when we registered a new rifle.
Even the right to reveal past offenses is debatable. If we were once ticketed for speeding or arrested for taking part in a political demonstration, especially in our salad days, should we bear responsibility for that misbehavior now and be turned down by employers, friends, and would-be partners? Such an unforgiving attitude toward people’s past mistakes seems counterintuitive and unkind.
And yet there are numerous stories of people rejected by employers and friends because of unflattering data that background check services excavated about them. Uber and Lyft are well known for disqualifying applicants over minor offenses reported by the background check companies they use. Would-be drivers get weeded out for a 10-year-old drunk-driving conviction, a 20-year-old fake-ID possession, or a decade-old, non-drug-related, nonviolent felony record. Nor are Uber and Lyft the only proponents of such strictness. According to a recent survey by the risk-alert firm Endera, a whopping 98% of businesses perform background checks on job candidates and weed out those whose records do not come back clean.
As predictive technology grows more sophisticated, our digging into people’s lives gets deeper. Some companies now use advanced artificial intelligence to assess personality by scanning everything a person has posted on Facebook, Twitter, and Instagram. These services claim that by analyzing the words in people’s posts and scrutinizing the facial expressions in the pictures they publish, they can build a reliable personality profile and predict whether someone will engage in antisocial or aggressive behavior in the future. Like well-known personality assessments, these services rank traits on a risk scale from “very low risk” to “very high risk,” flagging tendencies such as bullying, disrespect, harassment, and bad attitude.
Needless to say, this type of background check is even more ethically questionable. Nor does it seem reliable from a practical point of view. Words in people’s posts can be taken out of context and grossly misinterpreted; pictures can be misjudged even more badly. Detecting people’s intentions from their words or facial expressions is notoriously difficult even for us humans, evolutionarily equipped as we are with a theory of mind (ToM). How much harder, then, must it be for an artificial intelligence that is simply not programmed to attribute mental states – beliefs, desires, and emotions – to others?
The morality of background check services will continue to be debated. There is no denying that background checks often shed light on people’s criminal pasts and thereby keep us from making the wrong connections, whether professional or private. Yet at the same time, companies performing background investigations often reveal information that people have an inalienable right to keep under wraps. Like any other moral issue, the background check has the potential to help or harm others and ourselves.