The announcement follows protests by thousands of the search engine's employees who objected to the company's collaboration with the United States military on identifying objects in drone videos, Reuters reported.
A Google official said in a statement to Reuters that the company would not have pursued the drone project had the policies been in place a year ago. Though other tech companies haven't faced the same level of criticism over military contracts, Google's move could pressure other companies to make similar commitments.
He added that the principles also called for AI applications to be "built and tested for safety", to be "accountable to people" and to "incorporate privacy design principles".
Pichai said Google will design AI systems to be appropriately cautious and will seek to develop them in accordance with best practices in AI safety research. The company will proceed only when the benefits "substantially outweigh the risks", and then with safeguards in place. Google will instead seek government contracts in areas such as cybersecurity, military recruitment, and search and rescue, Chief Executive Sundar Pichai said in a blog post on Thursday.
Only weapons whose "principal purpose" is causing injury will be avoided, though it is unclear which weapons that covers. Most companies would absolve themselves of such responsibility, but Google has committed to assessing the primary goal and likely use of a technology, its uniqueness and scale, and the nature of the company's own involvement. Representative Pete King, a New York Republican, tweeted on Thursday that Google not seeking to extend the drone deal "is a defeat for U.S. national security".
He also said Google would not pursue AI in "technologies that gather or use information for surveillance violating internationally accepted norms" or in "technologies whose purpose contravenes widely accepted principles of international law and human rights".
Google says it will not pursue any AI research involving weapons or any other system whose primary purpose is to harm people. The final line of the updated guidance now reads: "And remember... don't be evil, and if you see something that you think isn't right - speak up!"
Google's Project Maven contract with the US Defence Department came under fire from employees concerned about the direction in which it was taking the company.
In Google's new principles, the company pledges not to pursue AI applications for weapons and technologies that "gather or use information for surveillance", in violation of accepted human rights laws.
A recent report claimed that Google will not renew its Project Maven contract next year because of the outcry, though leaked emails reportedly revealed that Google's higher-ups had been eager for such contracts.