Does business have a future?
AI is a mysterious field that many people aren’t very familiar with, even though we see it around us every day. The facial recognition on your phone, autocorrect and email spam filters are all computers that learn from data. I think the reason we don’t always realise that these everyday things are actually AI is that when we talk about artificial intelligence, we often mean something that doesn’t quite exist yet. What we do have is something called narrow AI: computers that are designed to perform a very specific task and become better and more efficient at it very quickly. Chess engines are a good example of AI that far exceeds human capability in the task it has been given, but they are still only good for that one thing. Artificial general intelligence (AGI), also referred to as strong AI, is on the other hand a concept where a computer could mimic human intelligence and apply what it learns to solve any given problem. Strong AI is where the big ethical questions really come in, and that is why many people believe we should discuss them and make decisions about them before we actually get there.
Climate change is often seen as the biggest problem facing our civilization’s future, and it involves many ethical questions of its own. Artificial intelligence could help us solve it, but only if it is truly used for the right purposes. A computer could be designed to tell us how to make all kinds of production more sustainable and make the most of limited resources. The reality of the situation is sadly different. Marketing algorithms are a huge field in AI, and they are designed only to make us consume more. Is it ethical to use these algorithms to grow your business and support yourself if that means contributing to unnecessary consumption?
Discussing ethical rules for AI can be complicated for many reasons; for example, just the time span in question can change the way we view ethics. Author Maija-Riitta Ollila discusses this in her book “Tekoälyn etiikka” (“The Ethics of AI”). Ollila gives an example of two companies: one that wants to produce a better result for the next quarter, and one that wants to preserve life on Earth for as long as possible. These two will likely view ethical and moral questions very differently. The question of autonomous weapons can sound simple, since it is easy to say that killing is unethical, but in real life we can see that not everyone agrees with that statement. If we ban the development of autonomous weapons, can we still ensure that we would be able to defend against one if needed?
Autonomous AI weapons are a big topic on their own. Artificial intelligence is already used in weapons, but having a “human in the loop” is required: a human operator is responsible for any final decision to engage a target. We must also remember that these machines only use narrow AI, so they can only perform the specific task they were designed to do. What if a strong-AI autonomous weapon soon becomes possible? When a machine can make decisions about life and death, who is responsible for any possible mistakes? For many people, the answer is that we should never give that power to a computer. We hold a machine like this to an exceedingly high standard and tend to think that if it could have any weaknesses or make mistakes, it should not be used. Thinking like this, it is as if people are trying to belittle their own flaws. Not only do people make mistakes in combat, but soldiers can also do unethical things outside of battle. All kinds of violence against civilians, theft and other crimes could be minimized when there are fewer soldiers.
When researching and thinking about the future of business through the development of AI, I see two vastly different sides to it. On one hand, we can discuss the endless possibilities of maximizing profit with advertising algorithms and other targeted marketing. When we go a step further and imagine a world where machines can perform most tasks and provide services better and more efficiently than any human, the world of business changes rapidly. When machines do all the work, what really is the value of that work? Who gets to profit, and who has access to those services? A popular proposed solution to a situation where most people do not have jobs is some form of universal basic income. This could make everything much cheaper and therefore more accessible to everyone. Fair distribution of wealth could help create a more sustainable future and a better environment for everyone to live in. Another question is whether we will even be able to agree on what is fair, or on the kind of future we want to aim for. Even if it is not possible to make all people comply with the same set of ethical rules, establishing a set of general rules that most of the world can agree on is important for taking control of, and responsibility for, the world we create with AI.