Pichai, who was appointed Alphabet CEO in December after leading Google for four years, is just the latest prominent industry figure to call for AI regulation. The Trump administration is proposing new rules to guide how the US government regulates the use of artificial intelligence in medicine, transportation, and other industries.
Of course, artificial intelligence is central to Alphabet's business model.
Pichai said that "history is full of examples of how technology's virtues aren't guaranteed", citing the internal combustion engine as a technology that brought travel to the masses but also caused accidents.
"It can be immediate, but maybe there's a waiting period before we really think about how it's being used", Pichai said. "The internet made it possible to connect with everyone and get information from anywhere, but it was also easier to spread the wrong information."
In 2018, Google pledged not to use AI in weapons-related applications that violate international standards or human rights. To get there, Pichai said, we need agreement on core values. To this end, he pointed to Google's own public guidelines for responsible AI use, as well as the company's continued commitment to help "navigate these issues together".
Pichai urged regulators to take a "proportionate approach" when drafting rules, days before the European Commission is due to publish proposals on the issue.
After being reappointed for a second term last fall with broad powers over digital technology policy, Margrethe Vestager has set her sights on artificial intelligence and is developing rules for its ethical use. "The question is how can it be done in a way that doesn't kill innovation, as well as continue to balance the benefits of AI with the risks it poses, as AI becomes more embedded in our lives?" she said. "It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone", Pichai said.
The Google chief said existing rules, such as Europe's GDPR privacy legislation and regulations for medical devices like AI-assisted heart monitors, would serve as strong foundations for governing AI in some areas, but that for self-driving cars, governments would need to establish new regulations.

Google itself has also come under intense criticism over how it handles users' privacy in some of its AI projects.