Google publishes its “artificial intelligence laws”: the giant renounces military applications

Among the technological advances we have seen in recent years, artificial intelligence has been one of the most disruptive, since it has the ability to completely change our habits and customs. Although artificial intelligence has been designed to provide greater comfort and make life easier for users, could it become a threat to society? Google was recently at the center of a controversy over precisely this question, and has published its own laws of artificial intelligence, with which it intends to govern the use of this technology in its future developments.

The Project Maven controversy

Apparently, Google’s seven laws of artificial intelligence come in response to the great controversy and concern unleashed by the internet giant’s work on Project Maven, a United States Department of Defense program to apply artificial intelligence to improving the navigation systems of military drones.

In April, when this project became known, more than 4,000 Google employees addressed a letter to Google CEO Sundar Pichai requesting that the company withdraw from the project. After receiving no response, twelve of them resigned from their posts.

Now, Sundar Pichai has responded with a letter in which, in addition to outright ruling out any application of the internet giant’s artificial intelligence for military purposes, he also lists the laws of artificial intelligence that will govern how the company harnesses the potential of this technology. Something like Asimov’s Three Laws of Robotics, but in a Google version.

Google’s seven laws of artificial intelligence

Pichai is blunt in stating that the seven laws of artificial intelligence are not just theoretical concepts, but concrete standards that “actively govern our product research and development, and influence our business decisions”.

So from now on, Google promises that its AI projects will be evaluated and developed based on the following objectives:

1. Be socially beneficial

Google will seek to ensure that the artificial intelligence advances it develops transform society in a beneficial way in industries such as health care, security, energy, transportation, manufacturing, and entertainment, while also committing to respect cultural, social, and legal norms in the countries where it operates.

2. Avoid creating or reinforcing unfair biases in its algorithms

Google assures that it will work to avoid unfair impacts in its algorithms, particularly those related to sensitive characteristics such as race, ethnicity, gender, national origin, income, sexual orientation, ability, and political or religious beliefs.

3. Be built and tested to be safe

Google will implement strict safety and security practices so that tests of its artificial intelligence technologies do not cause harm, carrying them out in controlled environments where appropriate and monitoring their operation after deployment.

4. Be accountable to people

Google’s artificial intelligence systems will always be subject to the control and direction of responsible humans.

5. Respect privacy

Google wants to assure users that its artificial intelligence will respect the privacy of the data it obtains, and notes that users will receive appropriate notices and consent requests.

6. Maintain high standards of scientific excellence

Google is committed to sharing its artificial intelligence knowledge by publishing educational materials and research best practices that enable more people to develop useful AI applications, and to conducting this open research with strict intellectual rigor and integrity.

7. Be available for uses that are in accordance with these principles

Google will limit potentially harmful or abusive applications of its artificial intelligence, and will evaluate each use based on several factors: primary purpose and use, nature and uniqueness, scale, and the nature of Google’s involvement.

A final point that Pichai highlighted in his statement is that although Google is not developing artificial intelligence to create weapons, it will continue to work with governments and the military in many other areas, including cybersecurity, training, military recruiting, veterans’ healthcare, and search and rescue.

In this way, and as Isaac Asimov predicted in his science fiction novels, the development of artificial intelligence technologies should be regulated based on ethical principles, but… will these laws be enough?
