AI Isn't a Solution to All Our Problems

From the esoteric worlds of predictive health care and cybersecurity to Google's e-mail completion and translation apps, the impacts of AI are increasingly being felt in our everyday lived experience. The way it has crept into our lives in such diverse ways, and its proficiency at low-level knowledge work, shows that AI is here to stay. But like any helpful new tool, there are notable flaws and consequences to blindly adopting it.

AI is a tool, not a cure-all for modern problems.

AI IS EVERYWHERE

AI tools aim to increase efficiency and effectiveness for the organizations that implement them. As I type this in Google Docs, text-recognition software suggests action items to me. That software is built on Google's machine-learning package TensorFlow, the same library that powers Google Translate, AirBnB's house tagging, brain analysis for MRIs, education platforms, and more. AI is also being used in legal work, where it helps advocates take on more cases by reducing the time they must spend on initial interviews. It may not be long before a patient receives a preliminary diagnosis from a computer before ever seeing a doctor. But while AI has crept into most aspects of our lives, how do we know that it is built responsibly?
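To make that ecosystem a little more concrete, here is a minimal sketch, written against the public TensorFlow/Keras API, of the kind of small text classifier that could sit behind a feature like action-item suggestions. It is an illustration only: the toy sentences, labels, and model shape are invented for this example and are not Google's actual implementation.

```python
# A toy "action item" detector built with TensorFlow/Keras.
# Everything here (data, labels, model size) is illustrative.
import tensorflow as tf

sentences = [
    "Please send the report by Friday.",      # action item
    "The weather was lovely last weekend.",   # not an action item
    "Can you schedule a follow-up meeting?",  # action item
    "Thanks for the update.",                 # not an action item
]
labels = [1.0, 0.0, 1.0, 0.0]

# Turn raw text into integer token sequences.
vectorizer = tf.keras.layers.TextVectorization(output_sequence_length=16)
vectorizer.adapt(sentences)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=vectorizer.vocabulary_size(), output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "action item"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(sentences), tf.constant(labels), epochs=10, verbose=0)

print(model.predict(tf.constant(["Could you review the draft tomorrow?"])))
```

The same building blocks scale up to translation or image tagging; what changes is the data fed in, which is exactly where the questions below begin.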

Every AI incorporates the values of the people who built it. The large amounts of data used to create these tools can come from surprising sources. Artificial-intelligence “farms” around the world employ people to perform repetitive classification tasks, such as image recognition, creating the categorized data necessary to build an AI. Beyond AI farms, online crowdsourcing projects are able to create robust tools because thousands of people come together to curate data. However, people bring biases and subjectivity that can influence an AI, intentionally or not. In 2016, Microsoft launched an AI chatbot, Tay, that evolved by interacting with Twitter users. The following 24 hours were a disaster: a lesson in how quickly an AI can go from being excited to chat with humans for the first time to praising Hitler.

Relying on data sets curated by humans incorporates the values and judgments of the companies producing the AI, the people implementing it, and its users. AIs are not created in a vacuum; they reflect the values of the people who build and use them.
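As a toy illustration of that point, the sketch below shows how the “ground truth” a model learns is really an aggregate of human judgments. The comments, annotators, and toxicity labels are all hypothetical; the point is that whichever judgments dominate the labeling pool get frozen into the dataset.

```python
# Hypothetical crowdsourced labeling: majority vote becomes "ground truth".
from collections import Counter

comments = ["you're wrong about this", "great point, thanks!", "typical nonsense from them"]

# Three annotators with different personal thresholds for what counts as "toxic".
annotations = {
    "annotator_a": [1, 0, 1],
    "annotator_b": [0, 0, 1],
    "annotator_c": [0, 0, 0],
}

labels = []
for i, comment in enumerate(comments):
    votes = Counter(judgments[i] for judgments in annotations.values())
    label = votes.most_common(1)[0][0]  # the majority judgment wins
    labels.append(label)
    print(f"{comment!r} -> label {label} (votes: {dict(votes)})")

# Any model trained on `labels` inherits the majority's judgment as if it
# were an objective property of the text.
```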

AI AND DATA ARE NOT ENOUGH

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail”—Abraham Maslow 

With any new technological development, it is easy to wax poetic about the ways it can solve society's ills, or to hit every nail with your new hammer. Such optimism about AI's potential is admirable, but it tends to ignore the biases built into AI. These biases range from the frustrating, like Snapchat's AI failing to recognize African American faces, to the life-endangering. Amazon's Rekognition AI falsely identified 28 sitting members of Congress as having previously been arrested, with people of color falsely matched at roughly twice the rate of their representation in Congress. This threatens to further reinforce biases against people of color, even though AI is assumed to be impartial. The Congressional Black Caucus wrote to Jeff Bezos: “It is quite clear that communities of color are more heavily and aggressively policed than white communities.”

The caucus continued: “This status quo results in an oversampling of data which, once used as inputs to an analytical framework leveraging artificial intelligence, could negatively impact outcomes in those oversampled communities.” In using AI, we need to recognize that it is not an impartial arbiter of justice, capable of distilling moral truth from data alone. Its conceptions of right and wrong come from us and from the data we choose to provide. Without careful input and monitoring, an AI will simply reinforce the societal biases and structures embedded in its training data.
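The feedback loop the caucus describes is easy to see with a toy calculation. The numbers below are invented, but they show how data that oversamples one community produces a “risk score” that reflects patrol intensity rather than behavior.

```python
# Invented numbers: two communities with the same underlying offense rate,
# one patrolled twice as heavily as the other.
underlying_offense_rate = 0.05
patrol_intensity = {"community_a": 1.0, "community_b": 2.0}  # b is over-policed
population = 10_000

# Heavier patrolling generates more arrest records for the same behavior.
recorded_arrests = {
    c: int(population * underlying_offense_rate * intensity)
    for c, intensity in patrol_intensity.items()
}
print(recorded_arrests)  # {'community_a': 500, 'community_b': 1000}

# A naive "predictive" score built directly from the records reproduces the
# enforcement disparity as if it were a fact about the residents.
risk_score = {c: arrests / population for c, arrests in recorded_arrests.items()}
print(risk_score)        # {'community_a': 0.05, 'community_b': 0.1}
```

The model never sees patrol intensity as a variable, so its output looks like an impartial finding about the community even though it is an artifact of enforcement.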

Now, one could argue that these are not problems with AI but problems with the data, and that a properly made AI shouldn't be biased. We could theoretically build an AI from data sets free of moral judgments. But even in creating such data sets, we would be applying judgment over what constitutes “moral.” We can't separate AI tools and data from the society that shapes them. Moreover, by applying AI to a value-laden problem, we make the mistake of assuming that social and ethical problems have technical solutions. With AI so entrenched in our everyday lives, we are already seeing that mistake play out.

Connecterra, for instance, is trying to use TensorFlow to address global hunger through AI-enabled efficient farming and sustainable food development. The company uses AI-equipped sensors to track cattle health, helping farmers spot early signs of illness. But this benefits only one type of farmer: those rearing cattle who can afford to outfit their entire herd with devices. Applied this way, AI can only improve the productivity of certain resource-intensive dairy farms and is unlikely to meet Connecterra's goal of ending world hunger.
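For context on what such a tool actually does, here is a minimal sketch of the kind of sensor analysis involved: flagging a cow whose daily activity falls far below her own recent baseline, an early warning sign of illness. The numbers are invented, and this is not Connecterra's actual model.

```python
# Toy collar-sensor check: alert if today's activity is far below the
# cow's own recent baseline. All values are invented.
import statistics

daily_activity = [9800, 10100, 9900, 10300, 9700, 10000, 6200]  # most recent last

baseline = daily_activity[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
today = daily_activity[-1]

# Simple z-score rule: flag large drops relative to the cow's own norm.
z = (today - mean) / stdev
if z < -3:
    print(f"Alert: activity {today} is {abs(z):.1f} std devs below baseline ({mean:.0f}).")
```

Useful as this is, it presupposes a herd, collars on every animal, and connectivity, which is exactly the narrowness the paragraph above describes.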

This solution, and others like it, ignores the wider social context of AI's application. The belief that AI is a cure-all that will magically deliver solutions if only you can collect enough data is misleading and ultimately dangerous, because it prevents other effective solutions from being explored or implemented sooner. Instead, we need both to build AI responsibly and to understand where it can reasonably be applied.

AI MUST HAVE TRANSPARENCY

The challenges with AI are exacerbated because these tools often come to the public as “black boxes”: easy to use but entirely opaque. This shields users from understanding what biases and risks may be involved, and the lack of public understanding of AI tools and their limitations is a serious problem. We shouldn't put our complete trust in programs whose inner workings even their creators cannot interpret. These poorly understood conclusions generate risk for the individual users, companies, and government projects that rely on these tools.

Given AI's pervasiveness and the slow pace of policy change, where do we go from here? We need a more rigorous system for evaluating and managing the risk of AI tools. Even specialized and opaque professions have checks in place, and AI tools should be no exception. Makers should be able to communicate their data sources, why they chose those sources, how they are trying to reduce bias in their programs, and how they are providing user safeguards. This information must be checked by a team of external reviewers to reduce the risk to individuals and communities, and that team should include not only technical experts but also people representing traditionally marginalized communities.
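One way such disclosure could look in practice is sketched below: a simple, machine-readable “model card” that a maker publishes and an external review panel signs off on. The field names and example values are illustrative assumptions, not any existing standard's required schema.

```python
# A hypothetical disclosure record for an AI tool (illustrative fields only).
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    data_sources: list[str]            # where the training data came from
    source_rationale: str              # why those sources were chosen
    bias_mitigations: list[str]        # steps taken to reduce bias
    user_safeguards: list[str]         # protections offered to end users
    external_reviewers: list[str] = field(default_factory=list)

card = ModelCard(
    name="face-match-demo",
    data_sources=["public mugshot dataset (hypothetical)", "employee volunteer photos"],
    source_rationale="Only sources with documented consent and known demographics.",
    bias_mitigations=["per-demographic error reporting", "raised match threshold"],
    user_safeguards=["human review required before any action", "opt-out and appeal process"],
    external_reviewers=["independent technical auditor", "community advocacy representatives"],
)
print(card)
```

Publishing something like this alongside a tool would give reviewers and affected communities a concrete artifact to examine, rather than asking them to trust a black box.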

We are still a long way from implementing a concrete system for reducing the risks of using AI. An AI future isn't all dystopian, but as users we should make sure we understand what has gone into making an AI and the risks involved in relying on it. Ultimately, AI is a powerful tool but not a solution unto itself; its responsible development and use should reflect this.
