Alejandro Saucedo says he could spend hours talking about solutions to bias in machine learning algorithms.
In fact, he has already spent countless hours on the topic, via talks at events and in his day-to-day work.
It's an area he is uniquely qualified to tackle. He is engineering director of machine learning at London-based Seldon Technologies, and chief scientist at The Institute for Ethical AI and Machine Learning.
His key thesis is that the bias which creeps into AI – a problem far from hypothetical – cannot be solved with more technology, but with the reintroduction of human expertise.
In recent years, countless stories have detailed how AI decisioning has resulted in women being less likely to qualify for loans, minorities being unfairly profiled by police, and facial recognition technology performing more accurately when analysing white, male faces.
"You are affecting people's lives," he says, in reference to the magnitude of these automated decisions in the security and defence space, and even in the judicial process.
Saucedo explains that machine learning processes are, by definition, designed to be discriminatory – but not like this.
"The purpose of machine learning is to discriminate toward a right answer," he says.
"Humans are not born racist, and similarly machine learning algorithms are not by default going to be racist. They are a reflection of the data ingested."
If algorithms merely adopt human bias from our biased data, it follows that removing the bias would leave a technology with great potential.
But the discussion often stops at this theoretical level – or acts as a cue for engineers to fine-tune the software in the hopes of a more equitable outcome.
It's not that simple, Saucedo suggests.
"An ethical question of that magnitude shouldn't fall onto the shoulders of a single data scientist. They will not have the full picture in order to make a call that could have an impact on individuals across generations," he says.
Instead, the approach with the most promise takes one step further back from the problem.
Going 'beyond the algorithm', as he puts it, involves bringing in human experts, increasing regulation, and taking a much lighter touch when introducing the technology at all.
"Instead of just dumping an entire encyclopaedia of an industry into a neural network to learn from scratch, you can bring in domain experts to understand how these machines learn," he explains.
This approach allows those making the technology to better explain why an algorithm makes the choices it does – something which is almost impossible with the 'black box' of a neural network working on its own.
For instance, a lawyer could help with the building of a legal AI, to guide and review the machine learning's output for nuances – even small things like words which are capitalised.
In this way, he says, the resulting machine learning becomes easier to understand.
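The contrast between a black box and an expert-reviewable model can be sketched with a toy example. Everything below – the feature names, weights and scoring rule – is invented for illustration, not taken from Seldon's products:

```python
# A transparent linear scorer: its output decomposes into one
# contribution per feature, so a domain expert can inspect exactly
# which inputs pushed a decision up or down. A neural network's
# internal weights offer no such per-decision reading.

def explain_score(features, weights):
    """Return (score, per-feature contributions) for a linear model."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, why = explain_score(
    features={"income": 1.0, "missed_payments": 2.0},  # invented inputs
    weights={"income": 0.5, "missed_payments": -0.3},  # invented weights
)
print(why)  # each entry is an auditable reason behind the final score
```

A reviewer – the lawyer in the example above – can then challenge any single contribution, rather than the model as a whole.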
This approach means automating a percentage of the process and requiring a human for the remainder – what he calls 'human augmentation' or 'human manual remediation'.
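In practice, human augmentation is often implemented as a confidence threshold: the system applies only the predictions it is sure of and escalates the rest. Here is a minimal sketch under that assumption – the function names and the threshold are illustrative, not drawn from Seldon's tooling:

```python
# Route each model prediction: automate only the confident ones, and
# send the rest to a human reviewer for manual remediation.
# The 0.9 threshold is an arbitrary illustrative choice.

def route(prediction, confidence, threshold=0.9):
    """Return ('automated', ...) or ('human_review', ...)."""
    if confidence >= threshold:
        return ("automated", prediction)
    return ("human_review", prediction)

print(route("approve_loan", 0.97))  # confident: applied automatically
print(route("deny_loan", 0.62))     # uncertain: escalated to a person
```

Tuning the threshold is exactly the trade-off the article describes: a lower value automates more of the process, a higher one keeps more decisions with humans.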
This could slow down the development of potentially lucrative technology battling to win the AI arms race – but it is a choice he says will ultimately be good for business and for people.
"You either take the slow and painful route which works, or you take the quick fix which doesn't," he says.
Saucedo is only calling for red tape that is proportionate to a system's potential impact. In short, a 'legal sentencing prediction system' needs more governance than a prototype being tested on a single user.
He says anyone building machine learning algorithms with societal impact should be asking how they can build a process which still requires review from human domain experts.
"If there is no way to introduce a human to review, the question is: should you even be automating that process? If you should, you need to make sure that you have the ethics structure and some form of ethics board to approve those use cases."
And while his premise is that bias is not a single engineer's problem, he says this does not make engineers exempt.
"It is important as engineers, individuals and as people providing that data to be aware of the implications. Not only because of the bad use cases, but being aware that most of the incorrect applications of machine learning algorithms are done not through malice but through a lack of best practice."
This self-regulation might be tough for fast-paced AI firms hoping to make sales, but conscious awareness on the part of everyone building these systems is a professional responsibility, he says.
And even self-regulation is only the first step. Good ethics alone does not guarantee a lack of blind spots.
That's why Saucedo also suggests external regulation – and this doesn't have to slow down innovation.
"When you introduce regulations that are embedded with what is needed, things are done the right way. And when they're done the right way, they're more efficient and there is more room for innovation."
For businesses looking to incorporate machine learning rather than build it, he points to The Institute for Ethical AI & Machine Learning's 'AI-RFX Procurement Framework'.
The idea is to take the high-level principles created at The Institute – such as the human augmentation mentioned earlier, and trust and privacy by design – and break them down into a security questionnaire.
"We've taken all of these principles, and we realised that understanding and agreeing on exact best practice is very hard. What is universally agreed is what bad practice is."
This, along with access to the right stakeholders to evaluate the data and content, is enough to sort mature AI businesses from those "selling snake oil".
The institute is also contributing to some of the official industry standards being created for organisations like the police and by bodies such as the ISO, he explains.
And the work is far from done – if a basic framework and regulation are to be created with enough success to be adopted internationally, even differing Western and Eastern ethics need to be accounted for.
"In the West you have good and bad, and in the East it is more about balance," he says.
There are also the differing concepts of the self versus the community. The considerations quickly become philosophical and messy – a sign that they are a little bit more human.
"If we want to reach international standards and regulation, we need to be able to align on those foundational components, to know where everyone is coming from," he says.


