
In 2014, Amazon got itself a new recruiting tool that used artificial intelligence (AI) to evaluate job applicants. “Everyone wanted this holy grail,” an Amazon spokesperson told Reuters. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.” It would also mean that every resume got full attention rather than being cast aside because the person reviewing it was tired, distracted, or didn’t like the applicant’s favorite sports team.

To enable the AI machine to do a proper job of sorting resumes, Amazon uploaded the resumes submitted to the company over the previous 10 years and trained its computer models to look for patterns in the qualities that made for a successful employee.

By the next year, 2015, the problem had become obvious. Because the vast majority of resumes came from men, the computer penalized any resume that included the word “woman” or “women,” such as “women’s rowing team” or “women’s tech club.” It also downgraded any resume from graduates of two all-women’s colleges. Amazon had left itself wide open for an EEOC complaint.

By the start of 2017, Amazon had disbanded the team because, according to the Reuters article, “executives lost hope for the project.”

Even with this warning in place, Reuters reported, “Some 55 percent of U.S. human resources managers said artificial intelligence, or AI, would be a regular part of their work within the next five years.”

Software companies abound with promises that employers will be able to recruit “fairly” and quickly, avoiding the biases of HR people who might reject an applicant over some inconsequential prejudice. But in order to recruit and screen quickly, the computer has to be told what criteria to judge applicants on. Therein lies the rub. Computers do what they’re told, and if you tell them to do something illegal or unethical, they do exactly that, no questions asked, because they know nothing about laws or ethics.

The old computer maxim “garbage in, garbage out” rules every item of computer output. If, as in Amazon’s case, the criteria the computer applies end up dismissing applicants who are members of protected classes, women in Amazon’s case, the employer ends up on the receiving end of a fair employment complaint, or the rental owner on the receiving end of a Fair Housing complaint.

For example, if an employer tells the computer to look only in certain zip codes for acceptable employees, that could eliminate any applicant living in a zip code with a primarily minority population. Tell the computer to look only for applicants who attended specific colleges and universities, and that would most likely eliminate any applicant who graduated from a predominantly Black college. Advertise job openings or vacancies only on Facebook, and, since Facebook’s users tend to be younger, it could have the effect of eliminating applicants more than 40 years old, a protected class.
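To see how mechanical this is, here is a minimal sketch, with made-up names and zip codes, of a screening rule like the one just described. It does not reflect any vendor’s actual product; the point is only that the computer applies the rule exactly as written, with no idea of who lives in which zip code or why that matters.

```python
# Minimal sketch of a facially neutral screening rule: "only consider applicants
# in these zip codes." All names and zip codes below are invented.

PREFERRED_ZIPS = {"98101", "98052"}   # the criteria someone typed into the system

applicants = [
    {"name": "Applicant A", "zip": "98101"},
    {"name": "Applicant B", "zip": "98118"},   # imagine a majority-minority zip code
    {"name": "Applicant C", "zip": "98052"},
    {"name": "Applicant D", "zip": "98118"},
]

def passes_screen(applicant):
    # The computer does exactly what it was told: in the list, or rejected.
    return applicant["zip"] in PREFERRED_ZIPS

advanced = [a["name"] for a in applicants if passes_screen(a)]
never_reviewed = [a["name"] for a in applicants if not passes_screen(a)]
print("advanced:", advanced)              # ['Applicant A', 'Applicant C']
print("never reviewed:", never_reviewed)  # ['Applicant B', 'Applicant D']
```

No one ever reads the resumes in the second list, and nothing in the output hints at who was screened out or why.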

It has to do with Disparate Impact, which Britannica.com defines as the “judicial theory . . . that allows challenges to employment or educational practices that are nondiscriminatory on their face but have a disproportionately negative effect on members of legally protected groups.” The key is that your intent doesn’t matter; it’s the effect of the practice that guides government investigations into illegal hiring and renting processes. You had no intention of illegally discriminating against anyone, but what you told the computer to do was discriminate with a disparate impact on some protected class.
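One common yardstick investigators use, and one you can apply yourself, is the EEOC’s “four-fifths” (80 percent) rule of thumb: if a protected group’s selection rate falls below 80 percent of the highest group’s rate, the practice deserves a hard look. A rough sketch of that arithmetic, with invented numbers:

```python
# Rough sketch of the EEOC "four-fifths" (80%) rule of thumb, using invented numbers.
# Selection rate = selected / applied, computed per group, then compared with the
# highest group's rate.

groups = {
    "men":   {"applied": 200, "selected": 60},   # selection rate 0.30
    "women": {"applied": 150, "selected": 18},   # selection rate 0.12
}

rates = {name: g["selected"] / g["applied"] for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    ratio = rate / highest
    verdict = "possible disparate impact" if ratio < 0.8 else "within the rule of thumb"
    print(f"{name}: rate {rate:.2f}, {ratio:.0%} of the highest rate -> {verdict}")
```

In this made-up example, women’s selection rate is only 40 percent of men’s, well under the 80 percent threshold, which is exactly the kind of pattern an audit of an AI screening tool should surface.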

AI can be a useful tool. It comes in three flavors: narrow AI, generalized AI, and artificial super intelligence.

Narrow or “weak” AI does specific tasks, the ones you instruct the computer to do: a robot in a factory, the music and shopping recommendations behind Alexa and Siri, or a neural network that helps run a power grid, for example. Neural networks process information in a way loosely inspired by how the brain’s own neural systems process data; the aim is to make the software behave a little more like a human brain.
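For the curious, the building block of a neural network is tiny: weighted inputs feeding a simple activation function. Here is a toy single-“neuron” sketch; real systems stack millions of these, and this is in no way how Alexa or a power-grid controller is actually built.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, squashed to a value between 0 and 1 by a sigmoid.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Made-up signals and arbitrary weights, just to show the mechanics.
print(neuron([0.5, 0.9], weights=[0.8, -0.4], bias=0.1))   # roughly 0.53
```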

Generalized AI emulates the human mind: it learns and remembers from what it does, incorporates what it has learned, and appears to think on its own. Still in its infancy, it promises to become ever more “useful” at learning what works and what doesn’t. Faced with an unfamiliar task, it can figure out, and remember, how to do it most effectively. Designed to learn new tasks and adapt to new situations, generalized AI is meant to be more capable than narrow AI, able to make decisions and act autonomously. The research pushing toward it leans on machine learning and deep learning, the same techniques that already power narrower applications such as facial recognition, autonomous driving, and natural language processing.

Artificial super intelligence may still lie in the future, but its developmental progress resembles a fully loaded freight train picking up speed as it barrels downhill, all but unstoppable. It would be a machine with intelligence matching or exceeding that of humans and a self-awareness giving it the ability to solve problems, learn, and plan for the future. Search for Google’s LaMDA for a taste of the debate, and think HAL in “2001: A Space Odyssey” and Skynet in the “Terminator” movies. The doomsday scenarios have super AI taking over the world and wiping out humanity.

AI can be a valuable tool to use in screening and recruitment. Presumably, employers and landlords have specific criteria for whom they will accept as employees or tenants. Those criteria need to be programmed into the AI platform so it can quickly qualify or disqualify applicants. Chances are you don’t have the training or capability to do that yourself, so you will have to employ an AI software company to set it up for you. That’s where the peril lurks. If they set it up so it illegally discriminates against protected classes of people, you are responsible, just as you are for the conduct of any contractor, such as a carpenter who repairs the front steps on a rental. If the carpenter installs the steps unsafely, it’s your responsibility even though the carpenter did the work. Likewise, if a software company installs an AI recruiting system that discriminates against a protected class, it’s your fault.

How can you tell whether the criteria you put in are fair to everyone? Good question. You have to do considerable “what-if” thinking. What if I were a Black person? Would I feel as if I could apply for that position? What if I were a single woman? Would I feel as if I could apply to rent that unit? What if I were 45 years old and looking for work? Would I ever see the ad if it ran only on Facebook, which I don’t use but my kids do? Would that it were that simple.

Amazon stopped using its biased recruiting system because it couldn’t figure out a way to make AI recruiting fair, even though it employs some of the most highly skilled programmers and engineers in the world. You probably don’t have their equivalent. What an employer or rental owner must do is run through lots of what-ifs and think critically about whom their recruiting and screening efforts end up aimed at. Who might be left out?
