Everyone is building AI hiring agents and showing them off. Some are genuinely impressive. A lot of them are already breaking EU law.
Look, we get it.
Someone posts a two-minute demo of an AI tool they built over a weekend. It reads a job spec, finds candidates on LinkedIn, screens 200 of them, scores each one, and lands a shortlist on your desk before you have finished your coffee.
The comments go wild. Fire emojis everywhere. Someone writes "this is the future of hiring" and gets 400 likes.
And honestly? It looks brilliant. We are not going to pretend it does not.
Here is the thing though. A lot of what is being shown off in those demos is already illegal in the EU. Not "grey area" illegal. Not "probably fine" illegal. Actually, formally, the-law-has-been-in-force-for-nearly-two-years illegal.
And the next major deadline? Four months away.
We are not here to rain on anyone's parade. We love what AI can do for hiring. Our whole business is built around it. But there is a big difference between AI that makes hiring genuinely better, and AI that quietly lands your company in serious legal trouble.
So here is what the EU AI Act actually says. Because your LinkedIn feed is not going to tell you.
WHAT IS THE EU AI ACT?
Think of it as GDPR, but for AI. It came into force in August 2024 and applies to you even if your business is based outside Europe. If you are hiring candidates in the EU, you are in scope.
It works on a risk scale. Spam filters sit at the bottom. Nobody cares about spam filters. At the top, in the high-risk category, is any AI that helps decide who gets hired, screened out, or ranked as a candidate.
That covers CV screening tools, candidate scoring systems, video interview analysis. Basically every tool being shared on LinkedIn with a fire emoji caption.
Get it wrong and the fines go up to €35 million or 7% of global revenue. Whichever is higher.
Those fire emojis are looking a little different now, right?
WHAT IS ALREADY BANNED
Since 2 February 2025, certain practices have been banned outright. This is not upcoming regulation. It has been in place for over a year.
Emotion recognition in interviews. Any AI that watches facial expressions, tone of voice, or body language to judge candidate suitability. Found to be unreliable and biased. Banned outright.
Biometric categorisation. Using AI to infer sensitive characteristics — race, political opinions, religious beliefs, sexual orientation — from someone's biometric data. Also banned.
Hidden manipulation. AI that nudges or influences candidate behaviour without their knowledge.
We still see tools doing all three of these being shown off online. If you recognise your hiring stack in that list, the conversation with your legal team is overdue.
THE CORE RULE EVERYONE IS MISSING
Here is the part the LinkedIn demos consistently get wrong. And it is the most important bit.
AI can help you run a better hiring process. It cannot make the decision for you.
No algorithm decides who gets the job. No automated score rejects a candidate without a human genuinely reviewing it. No system fires off a hiring decision that a qualified person has not actually considered and signed off on.
The law calls this "meaningful human oversight." It means more than a human clicking approve on a list without reading it. The person in the loop needs to be able to question the AI's outputs, understand what drove them, and override them if something does not look right.
A fully automated pipeline — the kind that gets the most applause on LinkedIn — does not meet that standard.
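To make the distinction concrete for anyone building these tools: in code, the gap between a compliant and a non-compliant pipeline can be as small as one function. Here is a minimal sketch of what "meaningful human oversight" could look like. The names and structure are our illustration, not anything the Act prescribes:

```python
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    """What the AI hands to the human: a score plus the reasons behind it."""
    candidate_id: str
    score: float
    reasoning: str  # the reviewer must be able to see *why* the score is what it is


@dataclass
class HiringDecision:
    candidate_id: str
    advance: bool
    reviewer: str        # a named, qualified person signs off
    reviewer_notes: str  # evidence they engaged with the output, not just clicked approve


def decide(result: ScreeningResult, reviewer: str, advance: bool, notes: str) -> HiringDecision:
    """The AI score informs the decision; a human makes it, and can override it."""
    if not notes.strip():
        raise ValueError("no sign-off without the reviewer recording their own reasoning")
    return HiringDecision(result.candidate_id, advance, reviewer, notes)


# The non-compliant version, for contrast - the kind that gets applause on LinkedIn:
#
#     def decide(result):
#         return result.score >= 0.7   # auto-reject, no human in sight
```

Notice that `advance` is an input the human supplies, not something derived from the score. The AI narrows the field and explains itself; the person remains free to agree, question, or overrule.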
THE AUGUST 2026 DEADLINE
Full high-risk AI obligations become enforceable on 2 August 2026. There are proposals to push this back to late 2027 or 2028 while EU technical standards are finalised, but that is not a reason to wait.
From that date, if AI is shaping your hiring process, you need to have all of this in place:
- Tell every candidate when AI is involved in their evaluation
- Get consent before processing sensitive data
- Document how the system makes decisions
- Test regularly for bias
- Keep logs for at least six months
- Guarantee meaningful human oversight at every decision point
- Complete a fundamental rights impact assessment
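The logging and documentation items are the easiest to start on today. For teams wondering what "keep logs for at least six months" looks like in practice, here is a rough sketch: one auditable record per AI-assisted step, recording what ran, what it said, and who reviewed it. The field names are our illustration, not a template from the regulation:

```python
import json
from datetime import datetime, timedelta, timezone

# "At least six months" is the floor, not the target - keep longer if in doubt.
MIN_RETENTION = timedelta(days=183)


def log_ai_event(path: str, candidate_id: str, tool: str,
                 output: dict, reviewer: str) -> dict:
    """Append one audit record: which tool ran, what it produced, who saw it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "tool": tool,          # which system produced the output
        "output": output,      # the score, ranking, or reasoning shown to the reviewer
        "reviewer": reviewer,  # the human in the loop - evidence of oversight
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record


def past_retention(record: dict, now: datetime) -> bool:
    """Only consider pruning records older than the minimum retention period."""
    logged_at = datetime.fromisoformat(record["timestamp"])
    return now - logged_at > MIN_RETENTION
```

An append-only log like this also doubles as the raw material for the documentation and bias-testing items above: if every AI output and every human review is on record, those reports become a query rather than a reconstruction.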
BEST PRACTICE: AUTOMATE THE STRUCTURE, NEVER THE DECISION
The EU AI Act does not ban AI in hiring. It bans careless AI in hiring. Here is what a compliant, high-performing process actually looks like.
1. Use AI to reduce noise, not replace judgement. Volume screening, scheduling, candidate comms, initial qualification. Real value, lower risk.
2. Keep humans in every decision that matters. Shortlisting, interview assessment, offer decisions — made by people who understand the role.
3. Tell candidates what is happening. Be transparent about where AI is involved. Candidates expect it. The law requires it.
4. Document your process. Know what tools you use, what they influence, and where human oversight happens.
5. Push your vendors. You cannot outsource your compliance obligations. If their tools are not compliant, that is your problem too.
We built meritt around the belief that AI should make recruiters better, not replace their judgement. Every shortlist we produce comes with reasoning. Every candidate gets human contact and feedback. Every hiring decision is made by a person, informed by data, not driven by it.
If you are reviewing your sales hiring process and want to make sure you are doing it right, we would love to talk.

