
Google and AI: where it already stands


Interesting articles from Bloomberg:

 

"Google Renounces AI Weapons; Will Still Work With Military

By Mark Bergen

CEO releases AI principles after Project Maven employee revolt

Ethics charter forbids surveillance in some situations

Google pledged not to use its powerful artificial intelligence for weapons, illegal surveillance and technologies that cause "overall harm." But the company said it will keep working with the military in other areas, giving its cloud business the chance to pursue future lucrative government deals.

Sundar Pichai, chief executive officer for Alphabet Inc.’s Google, released a set of principles on Thursday after a revolt by thousands of employees of the internet giant. The charter sets "concrete standards" for how Google will design its AI research, implement its software tools and steer clear of certain work, Pichai said in a blog post.

"How AI is developed and used will have a significant impact on society for many years to come," Pichai wrote. "As a leader in AI, we feel a special responsibility to get this right."

 

 

Some Google employees and outside critics cautiously welcomed the principles, although they voiced reservations, particularly about language that gives the company ample wiggle room in future decisions.

The seven principles were drawn up to quell concern over Google’s work on Project Maven, a Defense Department initiative to apply AI tools to drone footage. Staff protests forced Google to retreat from the contract last week. The company said on Thursday that if the principles had existed earlier, Google would not have bid for Project Maven.

Yet Google’s cloud-computing unit, where the company is investing heavily, wants to work with the government and the Department of Defense because they are spending billions of dollars on cloud services. The charter shows Google’s pursuit of these contracts will continue.

"While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," Pichai wrote. "These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."

Google’s charter is a watershed moment for the company and for AI as a field. Technology giants like Google have stretched far ahead in developing software and services that give machines more control over decisions, and these capabilities are now spreading to other industries, such as the automotive, health care and government sectors. A driving force behind the spread is the easier access to AI building blocks that Google, Amazon.com Inc. and Microsoft Corp. have provided through their cloud services.

 

AI advances are helping medical research and providing other benefits. But the use of the technology in other areas has sparked concern among lawmakers and advocacy groups. Civil liberties organizations recently called out Amazon for offering facial recognition tech to local police departments.

Microsoft CEO Satya Nadella proposed similar principles in 2016, without mentioning the military. “Microsoft may decide to forgo the pursuit of business proposals for numerous reasons, including the company’s commitment to upholding human rights," the company said in an emailed statement on Thursday. Amazon didn’t respond to a request for comment.

In Google’s new principles, the company pledges not to pursue AI applications for weapons, or for technologies that "gather or use information for surveillance" in violation of accepted human rights laws. The principles also state that the company will work to avoid "unjust impacts" from its AI algorithms, such as those that inject racial, sexual or political bias into automated decision-making.

 

In addition to outside criticism, Google has faced a rare spate of objections from its own staff. More than 4,000 employees signed a petition calling for the cancellation of the Project Maven contract, citing Google’s history of avoiding military work and worries about autonomous weapons. Last week, cloud chief Diane Greene said Google would not renew the deal when it expires next year, an unusual withdrawal from a business deal.

One Google employee who signed the petition said the principles don’t go far enough, but it’s good that the company has finally addressed the issue. The proposed limit on the use of AI for surveillance is positive, but the language was too cautious, the person said. Other employees described the internal reception as lukewarm. They asked not to be identified criticizing their employer.

The principles about surveillance were not specific enough, according to Peter Asaro, an associate professor at The New School who organized a letter from academics against Project Maven.

 

"The international norms surrounding espionage, cyberoperations, mass information surveillance, and even drone surveillance are all contested and debated in the international sphere," he said. "Ultimately, how the company enacts these principles is what will matter more than statements such as this."

Google’s principles state that the company won’t design or deploy AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."

Miles Brundage, an AI policy research fellow at the University of Oxford, said the statement’s focus on "injury to people" suggests Google AI could still be used for cyberattacks and autonomous weapons aimed at buildings and other non-human targets. "Bit vague in places," he wrote on Twitter. "But it’s a start."

The staff who opposed the Project Maven deal also noted, in an internal email on Friday, that they would look closely at the charter and weigh in. Google will integrate the principles into existing product-review processes and plans to set up an internal review board to enforce the guidelines this year, according to a person familiar with the company.

"While this is our chosen approach to AI development, we also understand that there is room for many voices in this conversation," Pichai wrote in the blog post. "And we will continue to share what we have learned about ways to improve AI technologies and practices."

With assistance by Dina Bass and Spencer Soper."


 

Article 2:

"What Google's AI Principles Left Out

We're in a golden age for hollow corporate statements sold as high-minded ethical treatises.

By Eric Newcomer

 


Last year, in the thick of its fake news scandal, Facebook released a 5,000-word document outlining, well, I'm still not sure exactly what. The letter attempted to pull the company out of its public-opinion black hole by posing probing questions, including the head-scratcher: "How do we help people build supportive communities that strengthen traditional institutions in a world where membership in these institutions is declining?" The answers were generally of the "build more Facebook" variety. It was a masterstroke in corporate pablum, though not so masterful that it saved the company from the onslaught of bad press.

Now, Google seems to be taking a page from the book of its Silicon Valley rival.

 

Facing a revolt from some of its employees over a contract with the U.S. military, Google has released a lengthy set of principles for how it will ethically implement artificial intelligence. The document is a clear attempt to balance continuing to land government contracts with quashing its staff rebellion. If the company truly planned to restrict or alter its behavior, Amazon, which hasn't seen the same open rebellion, would happily grab those government contracts.

But on the point of whether Google should use artificial intelligence to help the U.S. military kill people, the company is clear: Google will not pursue applications "whose principal purpose or implementation is to cause or directly facilitate injury to people." Not everyone (read: Michael Bloomberg) agrees with the company’s decision to abandon its work on the military's drone program, but at least on this point Google is explicit.

The rest of the company’s "principles" are peppered with lawyerly hedging and vague commitments. Parse this sentence with me: "As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides." In other words, Google is going to try to add up the good and the bad that might come from AI software, and act accordingly. The company has discovered utilitarianism. This is the level of sophistication of my high school philosophy class.

It doesn't get much better. Headers include "avoid creating or reinforcing unfair bias," "be built and tested for safety," and "be accountable to people." The principles Google is committing to could generously be considered table stakes. Even when it comes to surveillance, it's not really clear what exactly the company is promising. Google says it won't pursue spying technology "violating internationally accepted norms." But the Chinese government actively surveils its own citizens and Barack Obama allegedly approved tapping German Chancellor Angela Merkel's phone. Which norms will Google be adhering to, exactly?

Google’s document does pledge to address some interesting problems presented by artificial intelligence. Machine learning algorithms can produce answers that are sound on a statistical level but that can't be explained. That can lead to difficult-to-root-out biases and inscrutable results from the machines that increasingly rule many aspects of our lives. If you're a doctor trying to tell a patient why the computer thinks they have a disease, it's nice to know why it thinks so. Google writes that it will "design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal." That is an admirable goal, though what feedback will be deemed “appropriate” is still unclear.

 

A crucial question will be who decides if Google has fulfilled its commitments. Peter Eckersley, chief computer scientist at the Electronic Frontier Foundation, told Gizmodo that he thought Google should commit to an independent review process. It’s a proposal that makes a lot of sense. As it becomes more important, big tech needs to head in the direction of greater accountability. A few weeks ago, I wrote about Amazon Alexa. I made the point that if the government wants to build a new road by your house, it would hold a public hearing. If Amazon wants to turn your speaker into a listening device, you don't have any say as to how that's implemented. Who will be the first technology company to bring in an independent commission of philosophers to make sure it’s upholding its ethical commitments?

Without promising independent oversight, Google is just putting a new, less persuasive, spin on an old principle it’s tried to bury: Don't be evil.

This article also ran in Bloomberg Technology’s Fully Charged newsletter."

 

Now compare these snippets with this interesting video:

 


The awful thing is, if the West doesn't do it, then other countries will (eventually) develop such AI capability.


I am only wondering how far Google will go in incorporating AI into its technologies, and how much of that will involve personal data. I hate to think they have been using our phone calls under the pretext of improving voice search...

"The awful thing is, if the West doesn't do it, then other countries will (eventually) develop such AI capability."

 

AI has no allegiance but to its own 'survival' - think.


AI is like God - there are plenty of atheists, and no-one else can really agree what it means, however clear and obvious their own ideas are…

