Pushy, slow, stupid, flaky.
If you do a Google search for “Why are recruiters so…”, these are the autocomplete suggestions you’ll see. The industry doesn’t exactly have the best reputation.
If you thought tech recruiters sucked, AI recruiters are even worse. To be fair, they’re not always entirely at fault. AI is a broad, complex, technical, and ever-evolving topic which even experts in the field constantly debate. If researchers and engineers argue about what constitutes “AI”, then recruiters don’t stand a chance.
But I see the same avoidable mistakes over and over from technical recruiters peddling fancy-sounding positions like “Global Head of AI” who don’t know how AI is being used at the companies they represent and haven’t bothered to carefully research their prospects’ technical and professional backgrounds.
Here are frequent faux pas companies make when recruiting for AI roles. Now you can avoid making these mistakes yourself!
Common Mistakes Recruiters Make When Hiring AI & Machine Learning Talent
1. Use “Artificial Intelligence” as a vague catch-all term
In the early 1900s, companies hired for a “Vice President of Electricity.” As silly as this sounds today, electricity was a shiny new technological invention and a distinct job emerged to figure out what to do with it. Once commoditized, the role disappeared.
Just imagine trying to hire a “Vice President of Internet” right now. The same problem exists with titles like “Global Head of AI.”
“Artificial intelligence” refers to a wide range of technologies, tools, and techniques that can be applied to nearly any function in any company. Many companies have used such techniques for years, long before the recent media hype, which has resulted in many popular misconceptions.
Many practitioners prefer to refer to their work as “machine learning”, since the term “artificial intelligence” is often confused with “artificial general intelligence”, which refers to human-level or superior intelligence. Most researchers in the space believe we’re still far from achieving this milestone in our machines.
Engineers and researchers also specialize, so avoid jamming your recruiting materials with “AI” buzzwords and positive adjectives while leaving out actually useful information. Are you a retailer looking for a computer vision specialist to make sense of user-generated content around your brand and products? Or are you looking for a natural language processing (NLP) / natural language understanding (NLU) expert to turn unstructured textual company data into a queryable knowledge base? You may need very different profiles for these roles.
2. Have no clue what business problems you’re trying to solve
Hiring a “Chief AI Officer” or “Global Head of AI” is trendy these days. I actually agree that you’re unlikely to successfully pilot and implement enterprise-scale AI if you don’t have a strong CXO leading the charge.
That said, you need to clarify your business problems before you leap into technical solutions. What key enterprise functions do you believe AI technologies can accelerate? What specific manual tasks do you wish to automate away? How do you want to enhance employee decision-making and productivity? How can AI help you better serve your top customers?
You can evaluate technical feasibility only when you’ve established clear goals. Do you have highly structured historical data with predictable schemas? Perhaps machine learning techniques can reveal new patterns and insights. Do you have loads of image and video content you’d like to tag and understand? Perhaps modern deep learning techniques which accurately segment and classify visual content can be helpful. Or do you just have a bunch of data you have no clue what to do with?
In many cases, you’ll realize you don’t even need “AI” or machine learning, but rather simpler automation techniques like robotic process automation (RPA) or scripted chatbots.
Without clear directives, your AI talent will be wasted.
3. Overplay your company’s AI credentials and capabilities
I once spoke to an executive at a global consumer packaged goods (CPG) company who proudly claimed to be the “leader in AI” in his industry. Why? He managed to hire an external vendor who provides commodity speech-to-text transcription. And the process only took a few months!
Buying off-the-shelf software hardly constitutes “AI innovation”, and claiming otherwise will get you ridiculed by the AI community and practitioners who actually understand the nuances of applied machine learning. The pressure to appear “innovative” is overwhelming, but hype proclaimed to the media usually backfires with technical audiences, who quickly sniff out your bullshit.
4. Suck at data & analytics, but think you’re ready for AI
If your organization is traditionally non-technical, get ready to transform into a technology company.
If you lack an existing technical team, don’t have the infrastructure in place for cleaning and sharing data, or have an executive culture that is intuition-driven rather than data-driven, no brilliant “Chief AI Officer” will be able to do you any good.
Companies that succeed in AI initiatives typically have strong technical leadership, such as a Chief Technology Officer (CTO) or Chief Data Officer (CDO) with proven competence in applied machine learning. Jeremy Howard, former President of Kaggle and author of Designing Data Products, warns that strategy, operations, and technical know-how are all critical for successful AI products. “You need to build your whole team around the goal of making data-driven decisions, rather than expecting you can magically hire an ‘AI person’ and suddenly turn into the next Google,” he emphasizes.
Do you have a strong analytics culture, process, and tools that turn messy data into reliable information and insights? Has your company successfully automated processes with non-AI methods? Do you have business analysts, data scientists, and engineers who can be rapidly trained on new techniques?
If your answers to these questions are “No”, you’re probably not ready to build AI yourself and should lean on third-party AI vendors instead. Just don’t pretend you’re a “leader in AI” when you’re just buying other people’s software.
5. Use uninformed external recruiters instead of internal experts
When Steve Jobs wanted to hire someone important, he would personally call them. Who wouldn’t pick up the phone when Jobs is on the line?
Elon Musk backed OpenAI, an AI research organization which attracts some of the smartest experts in the field. He can do that because he’s Elon Musk.
You might not be Steve Jobs or Elon Musk, but that’s no excuse to outsource critical technical recruiting to external consultants who have no clue what they’re doing and who leave a terrible impression of your company on your highest-value prospects.
Have your leading internal AI champion, whether it be your CTO, CIO, CDO, or another credible technology leader, reach out to important recruits. If you don’t have the right internal champions, but you have billions of dollars, you can copy Marc Benioff’s strategy of big spending to acquire AI talent.
6. Hire a homogenous AI team without diverse backgrounds
Our creations adopt our biases. Homogenous thinking in technology teams has already led to many epic public fails, such as Google Photos tagging black people as gorillas or automated passport checkers rejecting photos of Asian applicants for having their “eyes closed”. The power of AI only amplifies this problem.
Hiring diverse teams is no easy task. AI educator and deep learning researcher Rachel Thomas points out that many organizations are oblivious to toxic cultures which turn off women and underrepresented groups. Diversity also extends beyond surface characteristics such as race or gender to include personality, belief systems, work and communication styles, experiences and competencies, and many other traits which can be difficult to identify and assess.
This doesn’t mean you shouldn’t try. If you don’t want to end up like Uber, prioritize diversity and inclusion from day one and be responsive to feedback on your existing culture. “I like to hear what companies are doing to take their ethical responsibility seriously (there are lots of risks related to bias in AI) and to promote inclusion (e.g. auditing employee promotion and retention rates),” suggests Thomas.
7. Fail to do your basic homework
Recruiters almost never carefully read LinkedIn profiles or academic research papers to familiarize themselves with a prospect’s work and contributions before blasting irrelevant opportunities to everyone in their database. A consumer internet company repeatedly pinged me about a junior role in the wrong department, and a friend of mine who’s a VP of Engineering at a well-known company was invited to apply for a barista job.
This blows my mind, because background research should be mandatory homework for recruiters.
The head of a leading AI research lab recently told me that American universities graduate only ~100 people a year whom he considers qualified to hire. The community is also quite tight, so word gets around fast about crappy hiring practices.
Do your homework and don’t piss off your prospects with your poor first impressions. You might not have anyone left to recruit!