Hello world,
My name is Andrew Losowsky, and I’m Product Director & Editor at The Markup and CalMatters. A few months ago, we wanted to hire a new engineer. Hiring is always a lengthy process, but this time I had to wade through what felt like an ocean of generative AI slop. Fake and exaggerated resumes have always existed, but now, thanks to the rise of AI tools, it’s incredibly hard to know who is even real.
I get why people use AI tools to find work. Job hunting is exhausting. Every employer wants to feel special, but applying takes so long that, according to a recent report, 65% of jobseekers use AI automation tools to find work, including some that promise to “tailor your resume for each role.” And why not, if employers are using AI to screen resumes anyway? (We don’t do this.) However, there’s also been a lot of reporting over the past year about fake candidates for technical roles.
Within 12 hours of posting the role, we received more than 400 applications. At first, most of these candidates seemed to be genuine. However, as the person who had to read them all, I quickly spotted red flags that strongly suggested inauthenticity:
- Contact information, such as email addresses and phone numbers, was repeated across multiple applicants, although their names didn’t always match the names in the email addresses. In at least one case, two totally different resumes were submitted under the same name, mailing address, and phone number.
- Some email addresses were formatted in a particular way, with a full name and a seemingly random number, often followed by “.dev@gmail.com”.
- Mailing addresses were located in commercial-only areas but weren’t post office boxes.
- Resumes had identical design patterns, including bolding certain phrases connected to skills and experiences.
- LinkedIn addresses were either broken, led to near-empty profiles, or contained profiles listing different employers from the resume.
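Some of these red flags are mechanical enough to check programmatically. Here’s a minimal sketch in Python of what such a filter might look like, assuming a hypothetical list of applicant records with name, email, and phone fields (this is an illustration, not our actual screening process):

```python
import re
from collections import defaultdict

def flag_applicants(applicants):
    """Flag applications showing two of the red flags described above.

    `applicants` is a list of dicts with "name", "email", and "phone"
    keys -- a hypothetical schema for illustration only.
    """
    flags = defaultdict(list)

    # Red flag: the same email or phone number reused across applicants
    # with different names.
    by_contact = defaultdict(set)
    for a in applicants:
        by_contact[("email", a["email"].lower())].add(a["name"])
        by_contact[("phone", a["phone"])].add(a["name"])
    for (kind, value), names in by_contact.items():
        if len(names) > 1:
            for a in applicants:
                if a[kind] == value or a[kind].lower() == value:
                    flags[a["name"]].append(f"shared {kind}: {value}")

    # Red flag: "name + random number + .dev@gmail.com" addresses.
    pattern = re.compile(r"^[a-z]+\d+\.dev@gmail\.com$", re.IGNORECASE)
    for a in applicants:
        if pattern.match(a["email"]):
            flags[a["name"]].append("suspicious email format")

    return dict(flags)
```

A filter like this only narrows the pile; every flag still needs a human to confirm it, since shared phone numbers or odd email formats can occasionally have innocent explanations.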
These suspicions were reinforced by the answers that these inauthentic candidates gave to questions on our application form. As part of our hiring process, we asked applicants why they wanted to work with us and which projects they were most proud of. We didn’t prohibit using AI to help write applications, but what we received went a lot further than using it for guidance around phrasing or language.
- Responses to “Why do you want to work with us?” followed a near-identical four-sentence pattern with minor variations:
“I want to work with you because…” with a summary of the first section of our About page
“As an engineer, I enjoy…” with a summary of the top of the job posting
“I’m particularly interested in…” with a summary of the second section of our About page
“While I haven’t worked in journalism before…” with a summary of the rest of the job posting
- A few applications even included “ChatGPT says” in their answers, without acknowledging why or how they’d used ChatGPT.
- Most obviously suspect: In several resumes, the work didn’t correspond with that of the stated employer but almost perfectly matched our job description. One applicant reported working for a trucking company, and, as part of their job, they “often worked closely with journalists to create data dashboards and visualizations.” In the most extreme case, one person claimed they had built our website and Blacklight tool (they hadn’t).
After less than a day, we removed our ad from ZipRecruiter, Glassdoor, and Indeed.com, and relied on our own outreach to get applicants. Following this, the flood of probable inauthenticity slowed to a trickle.
I was curious about our probably fake applicants, so I followed up with some of them.
Several candidates said they had worked for PixelFyre Code Labs, “an information business helping all businesses succeed,” whose web address goes nowhere; the company seems to exist only on LinkedIn. One of them said he couldn’t discuss his past journalism work because “most of the work was under NDAs or white-label partnerships.” I tried to set up a phone call anyway, but he never responded to multiple emails. “Due to NDA” was a common refrain from suspect candidates when questioned about their experience.
One person’s resume showed relevant experience but contained several of the red flags I mentioned earlier. By chance, I happened to have a professional contact at a company listed on his application – and, to my surprise, my contact not only confirmed the candidate’s past employment but also highly recommended him, so I set up a video interview.
During the interview, the candidate’s answers were evasive and generic. When asked why he wanted to work for us, he said, “I want to work for you because you’re doing cutting-edge technologies… You’re a fast company growing ‘fastly’, and I’m looking for new experiences.” He said he lived in Morrisville, N.C., and when I asked what he liked about the town, he said “The temperature — the weather is amazing.” Any specific places or things to see there? “No, just the weather.” What’s the weather like today? “It’s cold all the time, it feels freezing today, yes.” The temperature in Morrisville that day was 82 degrees.
I later shared a screenshot I had taken of the candidate with my contact at the company. “That is definitely not the same [person] we’d hired,” he said.
Our job ad also got picked up and shared by scammers. Someone made a fake email address similar to ours, then sent generic technical “tests” containing our logo to jobseekers, while linking to our job ad. Completing these tests led to a fake contract signed by someone claiming to be our CEO – it was at this point that the scammers requested financial information, saying they needed it to issue payments. I shudder to think what would have happened to anyone who handed over their banking details. Based on what I know about AI, creating these scams at scale is probably pretty easy.
Despite our travails, we got lucky. We had a stellar shortlist of actual people whose experiences matched their applications. We landed a fantastic new team member (welcome, Matt!), and learned a lesson: if you’re hiring a remote engineer, be prepared to wade through a lot of slop.
Sincerely,
Andrew Losowsky
Product Director & Editor
The Markup and CalMatters
