About the Guest

Dr. Lindsey Zuloaga is the Chief Data Scientist at HireVue, managing a team that builds and validates machine learning algorithms to predict job-related outcomes. Lindsey has a Ph.D. in Applied Physics and started her career as a data scientist in the healthcare space. At HireVue, she is working to transform traditional interviewing with a platform that focuses on understanding more of the candidate as a whole person, including interview responses, coding abilities, and cognitive skills, as opposed to just the facts shown on a resume.

Connect with Lindsey Zuloaga

Key Takeaways

- In the current market, good candidates are going quickly. HireVue's tools allow businesses to speed up their hiring process, often reducing their time to hire from weeks or months down to just days, while getting qualified and diverse candidates.
- AI systems can mimic human biases in hiring, which can lead to unfair outcomes for diverse candidates. To prevent this, companies must use transparent and fair algorithms.
- Hiring is becoming increasingly automated, which makes it easier for candidates to apply and interview, but also requires companies to be efficient in how they evaluate candidates.

Quote

"When candidates don't have to drive 20 minutes or fly across several states to go to a job interview, they can complete everything virtually, which is kind of the paradigm that we've all shifted into in the last couple of years, and it saves carbon to do that." – Lindsey Zuloaga

Highlights from the Episode

What is HireVue and what is your role?

I am the Chief Data Scientist at HireVue. HireVue is a hiring platform that allows companies to engage with candidates, screen them, assess them, and interview them. Right now, speed is the name of the game. In the current market, good candidates are going quickly. Our customers have hundreds or thousands of applicants for a position.
So our tools allow them to speed up their process, often reducing their time to hire from weeks or months down to just days, while getting qualified and diverse candidates. A big part of what we do, and our biggest principle, is consistency. We help our customers build good interviews that measure the attributes that are important for a particular job, assess all the candidates in the same way, and make sure that everyone gets a chance.

Tell me about your history and what led you to work on this problem.

I was always good at math, but I didn't realize how important it was until I took physics. I continued in physics and ended up going on to get a Ph.D. as well. When I started grad school, I wanted to do experiments because I didn't want to sit at my computer all day. Much to my surprise, the part of my work that I ended up enjoying the most was sitting at a computer writing code and analyzing data. I transitioned to industry, which was way more challenging than I anticipated. The modern-day job application process is painful: transitioning from one field to another, being left without connections. I felt strongly that the process was very broken. When I did get a job in data science, I started off working in the healthcare space, but then transitioned to working in hiring because it was near and dear to my heart after seeing firsthand the problems that are there.

Bias is a big problem in hiring. How do we prevent AI systems from mimicking human bias?

Human bias in hiring is a well-established issue. We've heard stories like how Amazon created a resume screener trained on sexist data from who was previously hired. If you approach these kinds of problems without an explicit way to combat bias, this can certainly happen. One benefit of using algorithms is that we can probe and tweak them in a way that's impossible to do with human minds.
So if we do see that certain attributes in an evaluation, like a resume, an interview transcript, etc., result in certain groups doing worse, getting hired less, or scoring lower on some assessment, we can remove those attributes from the assessment. Concretely, a naive algorithm would be incentivized only to predict an outcome, but we can add a penalty to that optimization for violating some fairness metric. In hiring, we talk mostly about group differences in outcomes. But I will say there are many different fairness metrics, and there are trade-offs to whichever you choose.

What areas of hiring are being disrupted? Can you compare the old way of doing things to this new paradigm?

Most jobs didn't use to have a huge number of applicants. That's changed a lot in the internet age, especially with remote work. We have one airline customer in particular who receives an average of 20,000 applicants for one position, and a retail customer who did 50,000 interviews in one weekend to staff stores all across the country. That is a level of volume that did not exist before, so we need different tools. But it's also an opportunity to make things consistent and less biased. We have a lot more data, and we can get quantitative about what we mean by fairness and how we know if principles of fairness are being violated. Another big area is the flexibility of being able to take an interview whenever it's convenient for you. All of that is being automated in a way that makes things much faster for candidates and companies, and saves energy.

What are some of the biggest challenges with this technology?

Hiring is really difficult, even for humans. It's really hard to predict what someone is going to do at a job just from an interview. So we try to get closer and closer to the essence of the job and how we can do some kind of job tryout.
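The two ideas mentioned in the bias discussion above, measuring group differences in outcomes and penalizing an objective for violating a fairness metric, can be illustrated with a small sketch. This is not HireVue's actual method; the metrics, weights, and data below are hypothetical, using the impact ratio (the "four-fifths rule" statistic) as the group-difference measure and a simple gap penalty added to a squared-error loss.

```python
# Hypothetical sketch of group-difference fairness metrics and a
# fairness-penalized objective. Not HireVue's implementation.
from statistics import mean

def impact_ratio(selected, group):
    """Ratio of the lowest group selection rate to the highest.
    `selected` is a list of 0/1 hire decisions; `group` labels each
    candidate. Values near 1.0 mean similar outcomes across groups."""
    rates = {}
    for g in set(group):
        decisions = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(decisions) / len(decisions)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

def penalized_loss(pred, actual, group, lam=1.0):
    """Mean squared prediction error plus lam times the gap between
    group mean scores -- a toy version of 'adding a penalty to the
    optimization for violating a fairness metric'."""
    error = mean((p - a) ** 2 for p, a in zip(pred, actual))
    group_means = {g: mean(p for p, gg in zip(pred, group) if gg == g)
                   for g in set(group)}
    gap = max(group_means.values()) - min(group_means.values())
    return error + lam * gap
```

With `lam = 0` this reduces to the naive objective that only predicts the outcome; raising `lam` trades some predictive accuracy for smaller group gaps, which mirrors the trade-offs between fairness metrics described above.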
There are a lot of technologies going that way, but you also need to balance that with respecting people's time. If candidates are applying for many jobs and have to spend a lot of time on each application, they might drop the ones where they're being asked to do too much. It's about evaluating people efficiently, but around things that are related to the job. In my five and a half years at HireVue, one of the biggest challenges has been the perception of AI. People hear about what we're doing and often assume the worst, so we've done a lot of work to be more and more transparent. A big push for us over time is to communicate clearly.

Explainability and transparency are big topics right now. What are your views on expectations of AI being explainable and transparent?

Explainability and transparency mean something different to different people. As data scientists, we often look at the model and understand what the model is doing. That can be difficult to interpret, but ultimately we can go into detail, and this is often what people are interested in: understanding how the system was built and trained. We recently released an explainability statement; we worked with a third party in the UK called Best Practice AI and dug into our use of AI. It's a good resource for customers, candidates, and anyone curious about our technology. We've shifted to more and more transparency. The target metric that we're training our algorithms to predict is how a trained evaluator would evaluate your answer. We saw that people assumed the worst, so for us it's been very well-received and positive to just open everything up and talk freely about our technology.

What legal frameworks and standards are there in this area?

There's the Equal Employment Opportunity Commission, which focuses on how to evaluate pre-hire assessments and hiring decisions.
We also go to the Society of Industrial-Organizational Psychology (SIOP) principles, which are very central to what we do because industrial-organizational psychologists have been building pre-hire assessments forever. I think a lot of people don't realize that the same principles apply whether or not you're using AI; some people think that hiring using AI is completely unregulated, which is not true. But we're seeing a lot of legislative conversations coming up around how AI is used, and there are additional concerns that AI could introduce a lot of bias on a large scale. As I said, we do have a lot of outside advisors to keep us up to date with that legal and ethical landscape. A lot of companies and countries have come out with vague principles around AI use, and many of them are not as quantitative as we would like. As we move forward, we want to be part of the conversation, whether we get new regulations on AI used in hiring at the federal level, or whether there are going to be many different states with their own laws.

Is there a book, blog, newsletter, website, or video that you recommend to our listeners?

- The Ethical Algorithm by Michael Kearns and Aaron Roth
- Blog – DeepMind

Shout-outs

- Cathy O'Neil – Mathematician, data scientist, and author
- Sarah Papazoglakis – Privacy & Trust Product Strategist (VR/AR), Reality Labs at Meta