
What Is the “Alignment Problem” in A.I.? What Business Owners Need to Know

Anyone who has ever had a dog understands the frustration of trying to get the animal to do even the simplest task. 

You hold out the dry, crumble-prone, bone-shaped treat, entreating your furry four-legged friend to “sit, siiiiiiiiittttttt, c’mon, just sit, [Pet Name], do you want the treat or not?” and get only a wagging tail and a panting, loose-tongued dog smile in return.

Or you throw a tennis ball across the dog park, and your pet runs after it. When the pet returns, its slobbering mouth holds not the tennis ball but a pinecone.

Sometimes, a version of this pet-related issue happens in artificial intelligence. It is known as the “alignment problem.”

The term refers to the problem of getting A.I. to do the tasks and solve the problems that its designers built it for.

In other words, a big part of the A.I. industry’s work is making sure the A.I. understands what it needs to do so that it does not unintentionally go rogue.

Why the Alignment Problem Is So Important 

Many researchers are concerned with the alignment problem because A.I. is often doing tasks of much more import and consequence than merely fetching a tennis ball. 

Unless, of course, it is an automated ball-fetcher on a tennis court. Even then, you want to ensure that the automated ball-fetcher aligns well enough to avoid throwing tennis balls at unsuspecting members of the crowd.

Businesses are using A.I. to screen applicants for loans.

The challenge here is to align A.I. with the goal of being equitable in judging who does and does not get a loan. 

Of course, lenders are using A.I. in the first place to cut costs, speed up operations, and improve efficiency. Making sure that the A.I. is aligned is in everyone’s interest, because an A.I. that discriminates against certain groups may expose the lender to lawsuits.

Businesses and the Alignment Problem

It is easy to see then how all business owners ought to be concerned with the alignment problem when it comes to artificial intelligence. 

Of course, business owners in general are concerned with an alignment problem when it comes to human workers as well. Do you think the upper management at a certain fast-food chain was all that happy when it came out that one of its locations had 10-year-olds working until the wee hours of the morning?

Just as business owners expect their employees not to resort to child labor to get the job done, they expect A.I. to do the right thing in performing its job.

Much of the time, this will involve human oversight and treating A.I. as an assistant rather than a miracle-working power tool that you can just leave alone to do its work. 

This is because of the very real possibility of social misalignment, which tech leaders have acknowledged as a potential danger of A.I. 

OpenAI’s Sam Altman Expresses His Fear of “Social Misalignments” of A.I. 

Human societies have certain aspirations and needs that are key to their functioning, or at least to their self-conception.

OpenAI’s Sam Altman has expressed his concern about social misalignments caused by A.I.

He was actually quite vague about how things could go wrong, but it is easy to imagine for ourselves how this could happen.

One thing he did stress was that such misalignment would be unintentional: the A.I. would simply be doing its job without realizing that it was undermining the broader goals its developers set for it.

An Example of Social Misalignment

To return to the application-screening A.I. example, the misalignment of such platforms could lead to broader social misalignments.

In societies where certain racial groups face discrimination and consequently experience more economic disadvantages, an A.I. application-screening platform could exacerbate this discrimination.

Here is how: 

The A.I. could be predisposed to deny loan applications from people in an economically disadvantaged segment of the population (whose disadvantage stems from widespread structural discrimination), simply because they have less money than applicants from more privileged backgrounds.

Never mind that their credit history shows they can meet payments, or that they have enough money in the bank to stay current. Purely because these applicants have less money overall than the more privileged segment, their applications get denied.
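To make this concrete, here is a minimal, purely hypothetical sketch of how such a predisposition could look inside a screening model. The weights, threshold, and applicant figures below are illustrative assumptions, not any real lender’s system or data:

```python
# Hypothetical sketch of a loan-screening model that encodes a wealth proxy.
# All weights, thresholds, and applicant figures are illustrative assumptions.

def score_applicant(balance: float, on_time_rate: float) -> float:
    # Imagine a trained model whose learned weights ended up dominated by
    # account balance (a proxy for wealth) rather than repayment history.
    return 0.9 * min(balance / 50_000, 1.0) + 0.1 * on_time_rate

APPROVAL_THRESHOLD = 0.6

applicants = [
    # (label, bank balance in dollars, share of past payments made on time)
    ("Applicant A", 60_000, 0.70),  # wealthier, mediocre repayment history
    ("Applicant B", 8_000, 0.99),   # less wealthy, excellent repayment history
]

for label, balance, on_time_rate in applicants:
    score = score_applicant(balance, on_time_rate)
    decision = "approved" if score >= APPROVAL_THRESHOLD else "denied"
    print(f"{label}: score={score:.2f} -> {decision}")
```

In this toy scoring function, account balance overwhelms repayment history, so the reliable but less wealthy applicant is denied while the wealthier, less reliable one is approved, which is exactly the pattern described above.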

This highlights how such an A.I. is socially misaligned: it contradicts the societal imperative to combat discrimination, which asks decision-makers to acknowledge that many people face disadvantage from birth and to weigh that factor in decisions like these.

This, then, is why human oversight can help mitigate the social misalignments of A.I.: a human can recognize bias in an A.I. system and override its socially misaligned decisions accordingly.
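As a rough illustration of what that oversight could look like in practice, the hypothetical sketch below routes model denials for applicants with strong repayment histories to a human reviewer instead of treating them as final. The 0.95 repayment-history cutoff is an assumption chosen purely for illustration:

```python
# Hypothetical sketch of a human-in-the-loop check layered on top of the
# scoring model sketched earlier. The 0.95 cutoff is an illustrative assumption.

def screen_with_oversight(score: float, on_time_rate: float,
                          threshold: float = 0.6) -> str:
    if score >= threshold:
        return "approved"
    # The model says "deny", but a strong repayment history suggests the
    # denial may rest on a wealth proxy, so escalate to a human reviewer
    # rather than letting the A.I.'s decision stand on its own.
    if on_time_rate >= 0.95:
        return "flagged for human review"
    return "denied"

# Applicant B from the earlier sketch: low score, excellent history.
print(screen_with_oversight(0.24, 0.99))  # -> flagged for human review
```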
