Women in AI: Miriam Vogel emphasizes the need for responsible AI

To give AI-focused female academics and others their well-deserved (and long overdue) time in the spotlight, JS has published a series of interviews focusing on notable women who have contributed to the AI revolution. We'll publish these pieces throughout the year as the AI boom continues, highlighting important work that often goes unrecognized. Read more profiles here.

Miriam Vogel is the CEO of EqualAI, a nonprofit founded to reduce unconscious bias in AI and promote responsible AI governance. She also chairs the recently created National AI Advisory Committee, which is mandated by Congress to advise President Joe Biden and the White House on AI policy, and teaches technology law and policy at the Georgetown University Law Center.

Vogel previously served as Principal Deputy Assistant Attorney General at the Department of Justice, where she advised the Attorney General and Deputy Attorney General on a wide range of legal, policy, and operational issues. As a board member at the Responsible AI Institute and senior advisor to the Center for Democracy and Technology, Vogel has advised White House leadership on initiatives ranging from women's, economic, regulatory, and food safety policy to criminal justice issues.

In short, how did you get started with AI? What attracted you to the field?

I started my career in government, initially as a Senate intern, the summer before 11th grade. I caught the policy bug and spent the next few summers working on the Hill and then in the White House. My focus at the time was on civil rights, which is not the conventional path to artificial intelligence, but looking back it makes perfect sense.

After law school, my career evolved from entertainment attorney specializing in intellectual property to civil rights and social impact work in the executive branch. I had the privilege of leading the Equal Pay Task Force while serving in the White House, and while serving at the Department of Justice under former Deputy Attorney General Sally Yates, I led the creation and development of implicit bias training for federal law enforcement.

I was asked to lead EqualAI based on my experience as a technology industry lawyer and my background in policies that address bias and systemic harm. I was drawn to this organization because I realized that AI was the next frontier in civil rights. Without vigilance, decades of progress could be undone in lines of code.

I've always been excited by the possibilities that innovation offers, and I still believe AI can provide amazing new opportunities for more populations to thrive – but only if we are careful at this critical moment to ensure that more people can participate meaningfully in its creation and development.

How do you address the challenges of the male-dominated technology industry, and, by extension, the male-dominated AI industry?

I fundamentally believe that we all have a role to play in ensuring our AI is as effective, efficient, and useful as possible. That means doing more to support women's voices in development (women, by the way, account for more than 85% of purchases in the US, so ensuring their interests and safety are integrated is a smart business move), as well as the voices of other underrepresented populations of different ages, regions, ethnicities, and nationalities who do not yet participate sufficiently.

As we strive for gender equality, we must ensure that more voices and perspectives are considered to develop AI that works for all consumers – not just AI that works for the developers.

What advice would you give to women looking to enter the AI field?

First, it's never too late to start. Never. I encourage all grandparents to use OpenAI's ChatGPT, Microsoft's Copilot, or Google's Gemini. We will all need to become AI literate to thrive in what will become an AI-powered economy. And that's exciting! We all have a role to play. Whether you're starting a career in AI or using AI to support your work, women should try AI tools, see what these tools can and can't do, see if they work for them, and generally become AI-savvy.

Second, the responsible development of AI requires more than just ethical computer scientists. Many people think that the AI field requires a computer science or other STEM degree, when in reality AI requires perspectives and expertise from women and men of all backgrounds. Get in! Your voice and perspective are needed. Your involvement is crucial.

What are some of the most pressing issues facing AI as it continues to evolve?

First, we need more AI literacy. We are "AI net positive" at EqualAI, meaning we believe AI will provide unprecedented opportunities for our economy and improve our daily lives – but only if these opportunities are equally available and beneficial to a larger cross-section of our population. We need our current workforce, the next generation, our grandparents – all of us – to be equipped with the knowledge and skills to benefit from AI.

Second, we need to develop standardized measures and metrics to evaluate AI systems. Standardized assessments will be critical to building trust in our AI systems and to enabling consumers, regulators, and downstream users to understand the limitations of the AI systems they interact with and whether those systems are worthy of our trust. Understanding who a system is built for and what the intended use cases are helps us answer the most important question: who could this fail for?

What issues should AI users be aware of?

Artificial intelligence is just that: artificial. It is built by humans to 'mimic' human cognition and empower people in their pursuits. We must maintain the appropriate level of skepticism and exercise due diligence when using this technology to ensure that we place trust in systems that deserve our trust. AI can augment humanity, but not replace it.

We must remain clear on the fact that AI consists of two main ingredients: algorithms (human-made) and data (which mirror human conversations and interactions). As a result, AI reflects and adapts to our human shortcomings. Bias and harm can occur throughout the life cycle of AI, either from the algorithms written by humans or from the data that is a snapshot of human lives. However, every human touchpoint is an opportunity to identify and limit potential harm.

Because people can only imagine as broadly as their own experience allows, and AI programs are limited by the constructs under which they are built, the more people with different perspectives and experiences a team includes, the more likely it is to notice the biases and other safety issues embedded in its AI.

What's the best way to build AI responsibly?

Building AI worthy of our trust is entirely our responsibility. We can't expect someone else to do it for us. We need to start by asking three basic questions: (1) who was this AI system built for, (2) what were the intended use cases, and (3) who could it fail for? Even with these questions in mind, there will inevitably be pitfalls. To mitigate these risks, designers, developers, and implementers must follow best practices.

At EqualAI, we promote good "AI hygiene," which means planning and accounting for your framework, standardized testing, documentation, and routine audits. We also recently published a guide to designing and operationalizing a responsible AI governance framework, which describes the values, principles, and framework for responsibly implementing AI in an organization. The document serves as a resource for organizations of any size, sector, or maturity that are adopting, developing, using, and deploying AI systems, with an internal and public commitment to doing so responsibly.

How can investors better push for responsible AI?

Investors play an outsized role in ensuring our AI is safe, effective, and responsible. Investors can ensure that the companies seeking funding are aware of and thoughtful about mitigating potential damages and liabilities in their AI systems. Even asking the question, “How did you implement AI governance practices?” is a meaningful first step towards better results.

This effort not only serves the common good; it is also in the best interests of investors who want to ensure that the companies they invest in and are associated with are not linked to bad headlines or hampered by lawsuits. Trust is one of the few non-negotiables for a company's success, and a commitment to responsible AI governance is the best way to build and maintain public trust. Robust and reliable AI makes good business sense.
