5 Concerns Surrounding the Scaling and Adoption of AI

ODSC - Open Data Science
7 min read · Aug 3, 2023

Over the years, there has been growing concern about AI's impact on society, and that concern has only intensified since the introduction of the popular chatbot ChatGPT. The imaginations of both the public and seasoned data professionals have run wild, and many are asking similar questions about AI. So one has to wonder: what are the concerns, and how can we address them in the data science field and beyond?

In the following sections, we'll take a look at a few of these concerns related to AI, along with some ways to address them.

Concern #1: Bias and Fairness in AI

Fairness and bias aren't an isolated issue with artificial intelligence; they are felt all over the field. Because AI algorithms are trained on vast amounts of data, if that data contains biases, the AI system can inherit and perpetuate those biases even when the programmers don't intend it to. A few examples of this can be seen with AI programs in different fields.

AI is entering the HR field and fast becoming a part of the overall hiring process. If a model is biased, qualified applicants could miss out on career opportunities. This can already be seen with applicant systems that screen potential candidates by keyword: if the system isn't trained correctly or lacks context that is important to some fields, it could exclude entire groups of people without just cause.

As many data scientists know, the value of data comes down to the fundamentals of data collection and curation. So to address this particular concern, there are a few things that can be done. First, ensuring that the training data is diverse, representative, and free from biased assumptions is crucial. Then, of course, the data has to be cleaned and properly maintained. Finally, AI systems need continuous monitoring during deployment, with adjustments made as needed, as sketched below.
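To make the monitoring step concrete, here is a minimal sketch of one common fairness check, the demographic parity gap: the difference in positive-outcome rates between groups. The column names, toy data, and alert threshold are illustrative assumptions, not taken from any specific system.

```python
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "hired") -> float:
    """Gap between the highest and lowest positive-outcome rates
    across groups; 0.0 means all groups are selected at equal rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy data: a hiring model's decisions for two applicant groups
# (column names and values are assumptions for this sketch).
decisions = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "hired": [1, 1, 0, 1, 0, 0],
})

gap = demographic_parity_gap(decisions)
if gap > 0.2:  # illustrative alert threshold
    print(f"Selection-rate gap of {gap:.2f} exceeds threshold -- review the model")
```

In practice, a check like this would run on every batch of live decisions and feed a dashboard or alert, so any drift toward biased outcomes is caught early rather than after the damage is done.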

Concern #2: Privacy and Data Security

Another major concern that has captured the public's attention has to do with data privacy and security.

It's well-known that models and other AI-powered systems often rely on vast amounts of personal data to make accurate predictions and decisions. This data can come from web scraping and other sources of public data, online and offline. With so much of it tied to individuals in some form, the collection, storage, and use of such data have raised significant privacy concerns.

Though for many the first thing that comes to mind is a data breach, it's the misuse of data that tends to be the higher risk. Misused or manipulated data can be used to discriminate against a group of people or to embed bias in an AI model. And this doesn't even touch on the individual level, where enough personal data could potentially be used to steal identities. Consider the advancements in deepfake technology: a bit of your text and voice data could be used to create a conversation with AI that could fool even those closest to you.

So how can this be addressed? Well, there are a few things that can be done. First come regulatory compliance requirements, data ownership, and consent; their absence is normally the first barrier to privacy. If individuals have control over their data and provide explicit consent, each person stays in control, making it more difficult to farm data to cause harm. On the collection side of things, there is data minimization: in short, collecting only the data necessary to train a model while avoiding unnecessary personal information that could put a person's privacy at risk, as in the sketch below.
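As a rough illustration of data minimization, the sketch below keeps only the features a model actually needs and explicitly drops direct identifiers before training. All column names here are hypothetical.

```python
import pandas as pd

# Features the model actually needs, and identifiers it should never see
# (both lists are hypothetical examples for this sketch).
MODEL_FEATURES = ["age_band", "years_experience", "skills_score"]
PII_COLUMNS = ["name", "email", "phone", "home_address"]

def minimize(raw: pd.DataFrame) -> pd.DataFrame:
    """Return a training frame with identifier columns removed and
    only the required feature columns retained."""
    trimmed = raw.drop(columns=PII_COLUMNS, errors="ignore")
    return trimmed[MODEL_FEATURES]
```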

Finally, for those in cybersecurity, there are proper security measures and strong encryption. By ensuring a robust encryption layout with proper security protocols to protect data from unauthorized access, the likelihood of people's data being put at risk is greatly reduced.
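As one concrete example, here is a minimal sketch of encrypting personal records at rest with the Python `cryptography` package's Fernet recipe. In a real deployment, the key would come from a secrets manager rather than being generated inline; that detail is assumed here for brevity.

```python
from cryptography.fernet import Fernet

# Generate a symmetric key (in production, fetch it from a vault or
# secrets manager instead of creating it in the script).
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"name": "Ada Lovelace", "email": "ada@example.com"}'

# The token is safe to store on disk or in a database; only a holder
# of the key can recover the plaintext.
token = cipher.encrypt(record)
assert cipher.decrypt(token) == record
```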

Concern #3: Unemployment and Job Displacement

This is the elephant in the room, and the one that needs a bit more attention. Though many in data science are rightfully excited about the prospects of AI, such as how it can be a new tool to help humanity as a whole, there are growing concerns that the increased adoption of AI and automation could lead to long-term job displacement. This is particularly concerning for positions built around the repetitive, routine tasks that AI is often best suited to take on.

And this isn’t a fear of just one economic class of workers. AI, and with it robotics, has the potential to shake the foundations of the labor market. Both white-collar and blue-collar positions, in different ways, could see major changes in the coming years. This has caused many to wonder how this concern should be addressed when it comes to the workforce.

Well, one way, of course, would be to incentivize reskilling and upskilling the portions of the labor market most likely to be affected by AI. This allows workers to take control of their career destiny and adapt to the ever-changing nature of the job market. Another part of the solution would be collaboration with AI, or human/AI integration. Earlier this year, Microsoft CEO Satya Nadella said he believed AI wouldn't replace workers but would instead enhance them.

So finding ways to encourage businesses to see AI as a beneficial part of the workplace, not as a means of replacing employees, will help. But some believe that this won't be enough. As AI continues to rapidly scale and expand in the market, many will be left without the flexibility required to upskill or reskill, or to find another position that values human/AI integration. Chief among those worried is DeepMind co-founder Mustafa Suleyman, who has said that part of the solution should be a universal basic income, or UBI.

Though there is no silver bullet for this concern, it's the one that worries many people the most, as economic disruptions aren't just numbers on paper. These are people with families and homes, all trying to find a way to make it.

Concern #4: Autonomous Weapons and Military Use of AI

If there is one thing about human nature we can count on, it is that at some point two groups will clash in war. Though popularized by science fiction films such as Terminator, autonomous weapons aren't quite there yet. The real issue lies with the ethical dilemma of removing the human element from the battlefield. Allowing AI to control the field with little or no human input could make nation-states less concerned about the consequences of military action, since human personnel would be far from the front.

This could also increase the lethality, violence, and brutality of warfare. Finally, an uncontrollable AI is also likely, at some point, to go wrong and create an even worse situation at the front.

So how could this concern be addressed?

First, the private sector has already begun stepping in. Last year, major robotics firms such as Boston Dynamics signed a pledge not to advance AI-powered robotics technology for warfare. Though this agreement is a great first step, it isn't legally binding and only covers major firms, not startups. Another way to address the concern is through international agreements. Much as chemical and biological weapons were prohibited after the First World War, nations can come together and agree not to remove the human element.

But because removing the human element could be politically beneficial for most nation-states, the likelihood of this occurring isn't high. Finally, much like the anti-bias frameworks discussed above, ethical frameworks could be established that create clear guidelines and principles for the use of AI in military applications.

Concern #5: Superintelligence and Existential Risk

Finally, the last concern on the list is by far the most interesting and the most horrifying at the same time. Skynet, HAL 9000, and the machine civilization of The Matrix films are all examples of AI in popular cinema, and of superintelligence gone wrong. Beyond fiction, a superintelligence's very existence could represent a fundamental, existential risk to humanity. So it's no wonder that the fear of a superintelligent AI has only grown, and some experts have sounded the alarm that superintelligent AI, though still likely some time away, could cause real harm to the human race.

So, other than smashing all the computers and returning to the days of the Industrial Revolution, what are some ways the risk of AI superintelligence can be addressed?

First, developing AI with ethical frameworks is critical. If AI is left blind, without guiding principles rooted in human values, things can go wrong fast. This involves adjusting for bias and other anomalies in models that could generate risky responses or behaviors. Then there is AI safety research, whose goal is to invest capital, human labor, and time in anticipating and mitigating potential risks and vulnerabilities. It's still a fresh subfield of artificial intelligence, but by working out principles such as these, the risk of an AI superintelligence going wrong could be greatly reduced.

Conclusion

Pretty interesting, right? It's clear that AI is here to stay and will continue to scale, so the need for responsible AI will grow with it. That's why at ODSC West, responsible AI will have its own track. Come to ODSC West this October 30th to November 2nd to meet the experts leading the field, hear what they have to say, and see what a world with responsible AI could look like.

Originally posted on OpenDataScience.com

Read more data science articles on OpenDataScience.com, including tutorials and guides from beginner to advanced levels! Subscribe to our weekly newsletter here and receive the latest news every Thursday. You can also get data science training on-demand wherever you are with our Ai+ Training platform. Subscribe to our fast-growing Medium Publication too, the ODSC Journal, and inquire about becoming a writer.

