What if, while conducting our everyday, digitally-connected social lives — shopping on Amazon, sharing photos on Facebook, networking on LinkedIn — we inadvertently share data with potential employers? What if our race, gender, ability, and genetic information are integrated into employment decision-making software or help employers determine who to hire, fire, or promote? What are we to do if and when we find that these decisions disproportionately and adversely impact people and communities of color?
The laws designed to protect us from employment discrimination lag behind the data and technologies that increasingly drive employment decisions. As researchers, policymakers, technologists, employers, and citizens, we should be concerned about where, when, and how big data, artificial intelligence, and the law intersect.
The Civil Rights Act of 1964 made it illegal for employers to discriminate against current employees or job applicants based on race, sex, color, national origin, and religion. Later amendments made discrimination on the basis of age, disability, and genetic information similarly illegal. These laws were created at a time when employers made employment decisions based on relatively limited information. Employers screened applicants based on information applicants supplied through an application, a personal interview, and sometimes a personality test. Managers determined how much to pay, who to promote, and who to fire by observing employees’ actions and interactions (and, of course, the dictates of the good ol’ boys’ club).
This simplifies the situation, but the point is that employment decisions have historically been based on a limited set of known information that was generally supplied to employers by applicants and employees. To be sure, this information made it possible to discriminate, and many employers have done just that. In the last twenty years alone, job applicants and employees have filed close to two million discrimination charges with the U.S. Equal Employment Opportunity Commission. Discrimination is found in about a third of such cases.
One of the reasons findings of discrimination have been plentiful is that we have come to recognize employment discrimination when we see it. Plaintiffs can marshal evidence from witnesses or documents to demonstrate that racial animus motivated their firing, or kept them from being promoted, and the like. And when such explicit evidence isn’t readily available, a plaintiff can marshal data that show that a particular employment practice has a disproportionate, adverse impact on people of color or members of other protected groups.
But what if we don’t know, or have no evidence that shows, why an employer decided to fire someone, or chose not to hire, pay, or promote them? Or what if third-party data brokers and software developers mine troves of data, model that data, analyze and find patterns in it, and develop a predictive algorithm that purports to tell you whether you “fit” a company’s workplace culture, how long you are likely to stay at a company before looking for another job, or whether you are likely to take significant amounts of time off for health reasons? What if those developers automate these employment decisions and sell the technology to employers? What if an employer uses the technology to do what it was designed to do without actually knowing how the technology was designed, what data it was based on, and what assumptions its developers made at every stage of the design and development process? What are we to do if and when we find that these types of decisions disproportionately and adversely impact people and communities of color? These are significant questions we must begin to ask and address in the age of “data-driven HR.”
On the evening of Thursday, March 28th, a panel of experts will take this discussion further. There, I will introduce four distinguished scholars affiliated with the Center for Critical Race & Digital Studies (an IHDSC-allied Center) — Ruha Benjamin, Tamara Nopper, Meredith Broussard, and Kelli Moore — who will engage each other and a public audience on these critical questions about big data, artificial intelligence, employment, and the law. The event is part of a day-long symposium organized by Center for Critical Race & Digital Studies affiliate Rachel Kuo and sponsored by NYU’s Department of Media, Culture, and Communication. Visit the event website to RSVP.
Dr. Charlton McIlwain is Vice Provost for Faculty Engagement and Development, Professor in the Department of Media, Culture, and Communication, and founder of the Center for Critical Race & Digital Studies.