Do Asimov’s Laws of Robotics Still Apply?


The Oxford English Dictionary credits the first known use of the word “robotics” to a short story called “Liar!” by Isaac Asimov. Asimov was a wildly creative writer who worked across fiction, non-fiction, science fiction, and poetry. His most notable contribution to robotics, however, appeared in another short story, “Runaround.”

Asimov’s Three Laws of Robotics were geared toward a scenario featuring humanoid robots (androids) that would serve at the pleasure of their human owners. Some of his thoughts are explored in I, Robot, a popular collection of short stories later adapted into a film.

The Three Laws were created to provide guidelines for robots that would presumably be interacting with humans. In the three-quarters of a century since his first mention of the laws, some things have changed about robots that suggest a change in the laws may be necessary.

While Asimov clearly had no shortage of imagination, it is hard to believe he was thinking of Roombas, drones capable of military action, or robotic arms that fill roles in warehouses. Developments in robotics are happening rapidly and seemingly exponentially. Our current interactions with robots are already different from those Asimov depicted, and the future may hold robotic capabilities previously unimagined.

The advancing capabilities of robots coupled with AI and machine learning lead to the common belief that the Three Laws of Robotics may not be comprehensive enough. It may be time to update these laws to reflect a more current view of robotics and provide flexibility for the changes that are sure to come.

Asimov’s Three Laws of Robotics

The laws were created by Isaac Asimov in defense of the human race. His vision of human-like robots depicted a world of interaction and potential harm that had to be mitigated with appropriate rules. The laws, as you can read below, are certainly reasonable. Issues arise as we examine them alongside modern robotics.

Law #1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Law #2: A robot must obey the orders given it by human beings except where such orders would conflict with the first law.

Law #3: A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
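Because each law yields to the one before it, the Three Laws amount to a strict priority ordering of constraints. As a minimal sketch (all function and flag names here are hypothetical, not from any real robotics API), the ordering could be modeled as a check that rejects a proposed action at the first law it violates:

```python
# Hypothetical sketch: the Three Laws as an ordered constraint check.
# A proposed action is described by boolean flags; the first violated
# law (in priority order) determines the rejection.

def permitted(action):
    """Return (allowed, reason) for a hypothetical action description."""
    # Law 1 has top priority: never harm a human.
    if action.get("harms_human"):
        return False, "violates Law 1 (harm to a human)"
    # Law 2: obey human orders, unless doing so conflicts with Law 1.
    if action.get("disobeys_order"):
        return False, "violates Law 2 (disobeys a human order)"
    # Law 3: self-preservation, subordinate to Laws 1 and 2.
    if action.get("risks_self"):
        return False, "violates Law 3 (needless self-endangerment)"
    return True, "permitted"

print(permitted({"harms_human": True, "disobeys_order": True}))
print(permitted({"risks_self": True}))
print(permitted({}))
```

Note that the ordering does the real work: an action that both harms a human and disobeys an order is rejected under Law 1, never Law 2. Even this toy version exposes the hard part the article goes on to discuss, which is that the flags themselves (what counts as “harm”?) are exactly what Asimov left undefined.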

The first issue we encounter with the three laws is the diversity of robots today compared to Asimov's vision of android servants. Some modern robots are extremely simple, following one or two commands to accomplish their tasks (iRobot's Roomba vacuums, for example). Do we really need a set of laws for these types of robots at all? It seems excessive to consider morality and choice when a Roomba simply finds its way through rooms while vacuuming.

Alternatively, we are seeing increased use of robotics in military settings around the world. The term “drone strike” is now a common utterance when discussing any war theatre. Most robots used by the military are currently deployed for human-saving measures like defusing a bomb. In this case, Asimov’s laws would apply.

What of the robots that are on the cusp of being used as weapons in more traditional battle? This setting would make it entirely impossible for robots to follow the first, and subsequently all, of Asimov’s laws. Sure, a military robot might be saving lives in a way while taking another, but this would still run counter to the original intent of the laws.

This scenario (the military deployment of robots) alone is enough to inspire a careful consideration of the appropriateness of Asimov’s Laws of Robotics. Unfortunately, the laws are so ambiguous, they leave far too much room for individual interpretation. The laws skip the important step of defining their subjects: robots.

As mentioned earlier, Asimov’s view of “future robots” was much different from what we see coming out of robotics laboratories today, which has made it genuinely difficult to determine how to apply the laws. The intricacies of modern robotics go far beyond anything Asimov could have imagined.

The development of so-called robots for the correction of genetic health disorders throws a kink into strict adherence to the laws. If one of these robots essentially becomes part of the human it is deployed in, are the actions of that human governed by the laws? How would we separate the human from the robot? If the robot is following the direction of human DNA, is it still responsible for following the laws?

Also undefined is what counts as “being harmed.” Does emotional well-being count? If a robot engages in a relationship that causes a human to love it and then hurts the human emotionally, does that count as harm under the law? What if a robot's AI causes it to play a song that brings back painful memories for a human? Is that harm? The sheer volume of such questions suggests it is time to consider improving Asimov’s Three Laws of Robotics.

Following the Laws is Currently Impossible

Asimov’s creativity seems to have led him to think way too far ahead while skipping many logical steps. As we discussed, many issues with his laws arise from the current glut of simplistic robots that either should not have to bother learning the laws or do not have the intelligence to do so.

While AI is rapidly advancing, it remains a challenge for robots to follow complicated commands involving multiple “thought processes.” To adhere to the Three Laws, a robot would need AI advanced enough to recognize that a planned behavior conflicts with one of the laws and then find a way to modify that behavior. AI capable of decision-making computation of this magnitude does not currently exist.

Isaac Asimov was certainly on the right track when he considered the problems robotics could present to the future world. However, the Three Laws of Robotics he developed seem to lack the comprehensiveness required by robotics today and moving forward. As things currently stand, following the three laws is nearly impossible for robots.

While the Three Laws are imperfect and will need significant modification, the original thoughts of Asimov should be considered as developers move forward with advanced AI and robotics. The Three Laws undoubtedly create a framework to build upon as we enter the true robotic age.
