The Robots are Coming (to eliminate employment discrimination)


As artificial intelligence becomes more pervasive and useful in our society, a relatively unexplored application is its capacity to eliminate subconscious bias in hiring. In this article, Corey Goerdt reviews examples of employer liability cases and lays out the potential benefits artificial intelligence offers employers when hiring new employees.

Artificial intelligence has made our lives easier in many ways. New smartphone technologies allow our everyday devices to do everything from inferring where we live and work to organizing our photos through facial recognition. Amazon Echo, a voice-command device released this summer, was ridiculed by some but made others nervous with its always-listening voice assistant, Alexa.

As technology improves, AI programs like Alexa may also be the key to eliminating discrimination in the workplace. Overt racism or sexism in hiring is pretty easy for employers to spot and stamp out. Too often, though, application and hiring processes are riddled with unintentional or subconscious flaws that leave employers with a potentially costly lack of diversity.

HR consultant Howard Ross explains how changing the hiring process for elite orchestras helped increase the number of female musicians in ensembles. In 1970, women accounted for only five percent of musicians in America’s top orchestras. Now, the League of American Orchestras estimates that women account for nearly half of all orchestra members.

Though the difference reflects a larger societal change, orchestras took concrete steps to remove bias from hiring decisions. Most importantly: curtains. After an African-American instrumentalist sued the New York Philharmonic in the 1970s, every major American orchestra required musicians to play behind a curtain during auditions. Current orchestral executives attribute today’s parity to that single change.

Not every employer, however, can combat discrimination by lowering a curtain, sitting back, and listening to applicants hit the high notes.

That’s where AI comes in. A host of startups are tapping into employers’ desire to diversify their workforces and remove unintentional bias and discrimination. Companies like Twitter, Microsoft, and Hilton Hotels have started using technology to remove the human bias inherent in subjective hiring decisions. For instance, technology developed by HireVue allows companies to make hiring decisions by analyzing speech patterns and physical gestures in standardized video interviews. HireVue also uses its expansive database of video interviews to evaluate hiring decisions based on subsequent performance of hired candidates.

Here’s the problem that companies are addressing: having a homogeneous workforce isn’t just bad for business; it could also lead to serious legal problems. Most people understand that it’s illegal to base hiring and firing decisions on factors like race, sex, age and national origin. What some may find surprising, though, is that plaintiffs can win big in cases where everyone agrees that the employer didn’t have any intent to discriminate.

For example: Firefighters in Akron, Ohio, won a $1.89 million verdict when the jury found that the city’s promotional exams unlawfully discriminated against African-American firefighters applying for promotion to lieutenant. The same jury found that the city discriminated against Caucasian firefighters who took a separate exam for promotion to captain.

In that case, which is still winding its way through the courts on appeal, the plaintiffs didn’t win because the city intentionally discriminated against both black and white firefighters. They won because the design of the city’s tests had a “disparate impact” on protected classes. See Howe v. City of Akron, 2015 U.S. App. LEXIS 16529 (6th Cir. Sept. 17, 2015).

That may be confusing – and for good reason. Disparate impact discrimination cases can be notoriously complex. As opposed to disparate treatment claims, which involve an employer intentionally discriminating against an individual or group of individuals, disparate impact claims focus on an employer’s seemingly neutral practice or policy that negatively impacts a protected class. Critically, one defense to a disparate impact claim is that the challenged qualification or test (in Akron, the promotional exams) is predictive of, or significantly correlated to, important requirements of the position or safe and effective performance of the position. Albemarle Paper Co. v. Moody, 422 U.S. 405, 431 (1975).
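The article doesn’t go into how a disparate impact is actually measured, but one widely used screen is the EEOC’s “four-fifths rule”: a selection rate for any protected group that falls below 80 percent of the rate for the most-favored group is generally treated as evidence of adverse impact. The short Python sketch below illustrates that arithmetic; it is a simplified illustration rather than a legal test, and the group labels and numbers are invented for the example.

```python
# Hypothetical illustration of the EEOC's "four-fifths rule," a common
# (but not dispositive) screen for disparate impact: a selection rate
# for any group below 80% of the highest group's rate may indicate
# adverse impact. Group names and figures below are invented.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / applicants
            for group, (selected, applicants) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Return groups whose selection rate is less than `threshold`
    times the highest group's rate, with their impact ratios."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best
            for group, rate in rates.items()
            if rate < threshold * best}

if __name__ == "__main__":
    # Invented example: 48 of 80 applicants selected from one group,
    # 12 of 40 from another.
    example = {"Group A": (48, 80), "Group B": (12, 40)}
    print(four_fifths_flags(example))  # {'Group B': 0.5} -> potential red flag
```

In practice, courts and agencies look at statistical significance and the specific facts of the case, not just this ratio, but the sketch shows why a “neutral” test can still produce a legally meaningful disparity.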

All this is to say that employers can run into huge issues – in the court of public opinion and actual courtrooms – when hiring and promotion systems create unintentional disparities in their workforce. But those issues could be neutralized by high-tech systems and analyses. If companies like HireVue can use their growing databases to reduce bias in hiring, it is not inconceivable that AI could help employers build hiring and promotion systems that either eliminate problematic disparate impacts or provide a defense to employers if a disparate impact claim is brought.

Some of America’s largest employers are already investing in these solutions. Maybe it’s time for your organization to consider implementing similar systems to combat subconscious bias in the workplace. History shows us that these cases can be both costly and somewhat arbitrary, so having proof of unbiased procedures could protect your company from claims of unlawful disparate impacts.
