
Francine Bennett uses data science to make AI more accountable

March 04, 2024
6 Min Read

To give women academics and others working in AI the recognition they deserve, TechCrunch is launching an interview series featuring inspiring women who have made significant contributions to the AI revolution.


As the AI boom continues, we plan to publish a number of stories throughout the year highlighting important work that frequently goes overlooked. More profiles in the series are available here.

Francine Bennett is a founding board member of the Ada Lovelace Institute and currently serves as the organization's acting director. Earlier in her career, she worked in biotech, using artificial intelligence (AI) to develop treatments for rare diseases. She also co-founded a data science firm and is a founding trustee of DataKind UK, which provides data science support to British charities.

Could you briefly explain how you got started in AI? What drew you to this area of study?


I enjoyed playing around with computers, but when I first started out I wasn't particularly interested in applied math; it seemed to be just calculations, and not very intellectually stimulating. I became interested in AI and machine learning later, when it became clear to me and others that, with data becoming abundant in so many contexts, there were exciting new opportunities to solve all kinds of problems in novel ways. Those opportunities turned out to be far more interesting than I had realized.

Which of your AI-related works are you most proud of?


One of the projects I'm most proud of used machine learning (ML) to look for previously unnoticed patterns in patient safety incident reports at a hospital, to help medical professionals improve future patient outcomes. It isn't the most technically complex work, but it genuinely helps people. I'm also honored to speak at gatherings like this year's UK AI Safety Summit, where I advocate for putting people and society before technology. I believe that can only be done with authority by someone who has worked with the technology, is enthusiastic about it, and has a strong understanding of how it affects people's lives in practice.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?


Mostly by choosing to work with and in environments that value individuals and their abilities over gender, and by trying to use whatever influence I have to make that the standard. I also try to work in diverse teams whenever I can. Being part of a balanced team, rather than being a standout "minority," creates a very different environment and makes it far more likely that everyone can reach their full potential.

More generally, because AI is so complex and likely to affect so many people, particularly those in marginalized communities, it's clear that people from all walks of life must be involved in building and shaping it if it is going to be successful.

How would you advise women looking to pursue careers in artificial intelligence?


Have fun with it! You'll always find something worthwhile and hard to work on in this fascinating, intellectually stimulating, and constantly evolving field. There are also a ton of significant applications that no one has even considered yet. Additionally, try not to worry too much about having to know every single technical detail since, quite honestly, nobody does. Instead, focus on an area of interest and work your way up from there.

What are some of the most important problems that AI will face in the future?


The biggest problem, in my opinion, is that we don't have a shared understanding of what we want AI to do for our society, and of what it can and cannot achieve. There is a great deal of technological innovation happening right now, much of it carrying real costs for the environment, the economy, and society, and a lot of excitement about deploying these new technologies without fully understanding the risks or unintended consequences. Most of the people building the technology and discussing its dangers and repercussions come from a relatively narrow group.

We currently have a window of opportunity to choose what we want from AI and to work toward achieving it. We can look at how other technologies developed and how we handled them, or at what we would have done differently. For example, what are the AI equivalents of crash-testing new cars, holding restaurants accountable for food poisoning incidents, consulting those affected when granting planning permission, or appealing an AI decision the way you can appeal a human bureaucracy's?

Which concerns should users of AI be aware of?


When using AI, I'd like people to feel confident in their own abilities and to be clear about what they want the technology to do for them. It's tempting to treat AI as something unknowable and unpredictable, but it's really just a set of tools, and I want people to feel empowered to make their own decisions about how they use it. That said, governments and industry should be building the frameworks that let anyone using AI feel secure; that burden shouldn't fall solely on the technology's users.

What is the best way to build AI responsibly?


At the Ada Lovelace Institute, which seeks to make data and AI work for people and society, we ask this question constantly. It's a difficult problem with countless possible approaches, but in my opinion two are particularly important.

The first is to retain the flexibility to pause, or occasionally not to build at all. We frequently see AI systems developed at speed, with the developers trying to bolt on "guardrails" afterward to mitigate problems and risks, but never putting themselves in a position where halting is a real option.

The second is to genuinely engage with, and try to understand, how different kinds of people will experience what you're building. If you can truly empathize with their experiences, you have a far better chance of creating something that actually solves a problem for people, grounded in a shared understanding of what good would look like, and of avoiding harms such as unintentionally making someone's life worse because their daily existence is simply very different from yours.

For instance, before developers can obtain access to healthcare data, they must complete an algorithmic impact assessment created in collaboration with the NHS and the Ada Lovelace Institute. This means that before deploying an AI system, engineers must evaluate its potential societal effects and consider the real-world experiences of any affected individuals or communities.

How can investors more effectively advocate for ethical AI?


By asking questions about their investments and their possible futures: for this AI system, what does working brilliantly and responsibly look like? Where could things spiral out of control? What are the potential ramifications for individuals and communities? How would we know if building had to stop or major changes had to be made, and what would we do next? There's no one-size-fits-all answer, but simply by raising the right questions and sending a clear message that responsibility matters, investors can influence where their firms focus their attention and effort.
 


All Rights Reserved © 2024 Fintech Newz