
Google AI will not be used for war.

Google AI is ever expanding, and the company has invested heavily in it. The popular Google Assistant and its Duplex feature have received lots of great reviews.

Google Artificial Intelligence

Google AI will not be used for military purposes.

Google has a partnership with the US military, and this has raised many eyebrows. Many believed the US military would eventually use Google's advanced AI technology for military purposes.


What is AI?

AI simply means artificial intelligence. According to Google, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound. At Google, we use AI to make products more useful – from email that’s spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy.
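To make "programming that learns and adapts" concrete, here is a toy Python sketch, purely illustrative and not any Google system, in which a program adjusts a spam-score threshold from labelled examples instead of having the rule hard-coded (the data and numbers are made up):

```python
# Toy illustration of "learning": adjust a spam-score threshold from examples.
examples = [  # (spam_score, is_spam) pairs -- made-up data for illustration
    (0.1, False), (0.3, False), (0.4, False),
    (0.6, True), (0.8, True), (0.9, True),
]

threshold = 0.0      # initial guess
learning_rate = 0.05

for _ in range(100):  # repeatedly nudge the threshold toward fewer mistakes
    for score, is_spam in examples:
        predicted_spam = score > threshold
        if predicted_spam and not is_spam:
            threshold += learning_rate   # too aggressive: raise the bar
        elif is_spam and not predicted_spam:
            threshold -= learning_rate   # too lenient: lower the bar

print(f"Learned threshold: {threshold:.2f}")  # settles between the two classes
```

The point of the sketch is only that the behaviour comes from the data, not from a rule written by hand, which is what distinguishes this style of software from conventional programming.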

After Google's partnership with the US military on machines that can learn to analyse drone footage, there were a lot of protests within and outside the corporation.

In a recent blog post, Google stepped out to clear the air on that. In the post, titled “AI at Google: Our Principles”, the company promised that it will not use its AI tech for military purposes.


Why Google responded quickly this time

A decision to provide machine-learning tools to analyse drone footage caused some employees to resign.

Google told employees last week it would not renew its contract with the US Department of Defense when it expires next year.

It has now said it will not use AI for technology that causes injury to people.

The new guidelines for AI use were outlined in a blog post from chief executive Sundar Pichai.

He said the firm would not design AI for:

  • technologies that cause or are likely to cause overall harm
  • weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people
  • technology that gathers or uses information for surveillance violating internationally accepted norms
  • technologies whose purpose contravenes widely accepted principles of international law and human rights

He also laid out seven more principles which he said would guide the design of AI systems in future:

  • AI should be socially beneficial
  • It should avoid creating or reinforcing bias
  • Be built and tested for safety
  • Be accountable
  • Incorporate privacy design principles
  • Uphold high standards of scientific excellence
  • Be made available for uses that accord with these principles

When Google revealed that it had signed a contract to share its AI technology with the Pentagon, a number of employees resigned and thousands of others signed a protest petition.

Project Maven involves using machine learning to distinguish people and objects in drone videos.
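Google has not published Project Maven's code, but the general class of technique, running an object detector over individual video frames, can be sketched with off-the-shelf tools. The model choice (torchvision's pretrained Faster R-CNN) and the frame file name below are illustrative assumptions, not details from the source:

```python
# Generic object-detection sketch (NOT Google's Project Maven code):
# run a pretrained detector on a single frame and report confident detections.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

frame = Image.open("drone_frame.jpg").convert("RGB")  # hypothetical frame extracted from footage

with torch.no_grad():
    predictions = model([to_tensor(frame)])[0]

# Keep only confident detections; in the COCO label set, class 1 is "person".
for label, score, box in zip(predictions["labels"], predictions["scores"], predictions["boxes"]):
    if score > 0.8:
        kind = "person" if label.item() == 1 else f"object (class {label.item()})"
        print(f"{kind} detected with confidence {score:.2f} at {box.tolist()}")
```

In practice such a detector would be run frame by frame over the footage, which is what made the prospect of automated surveillance at scale so controversial.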

The Electronic Frontier Foundation welcomed the change of heart, calling it a “big win for ethical AI principles”.


Other non-military uses of Google AI

In the blog post, Google went further to state how its AI is being used in ways that are very helpful to society.

“Beyond our products, we’re using AI to help people tackle urgent problems.

A pair of high school students are building AI-powered sensors to predict the risk of wildfires.

Farmers are using it to monitor the health of their herds.

Doctors are starting to use AI to help diagnose cancer and prevent blindness.

These clear benefits are why Google invests heavily in AI research and development, and makes AI technologies widely available to others via our tools and open-source code”.

My take is that the future will surely be dominated by AI tech, so it is important to ensure a more ethical use of it.
