RSS: Google Blog




Inside the Google team that dreams up colors

July 13

How do you bring a new color to life? Just ask Isabelle Olsson, who leads Google’s Color, Materials and Finish team. “Every year we work on hundreds of new colors, but maybe one or two make it,” she says. They dream up colors for things like Nest Minis and Pixel phones and develop them from scratch. Their goal is to create colors you’d love to see, not hide away in a cabinet or case. 


Isabelle Olsson

Among the latest to make the cut are the colors of the new Pixel Buds: Oh So Orange, Clearly White, Quite Mint and Almost Black. I recently spent time talking to Isabelle about why color is so important and where she finds inspiration—and of course, which Pixel Buds shade is her personal favorite.

Where did your interest in design first come from?

There’s been one consistent thing I've always wanted to do, and that’s make people smile. When I was little, industrial design wasn’t a profession I was aware of, so I did things like stage design for plays, designing costumes and jewelry and building doll furniture. Eventually, when I went to art school, I found a way to combine my creative side with my problem-solving side, because I also loved math and physics. 

Nearly all of us have a favorite color, often starting when we’re little. Why do you think that is?

Color is the foundation for living. Look at flowers, some of which evolved to look bright to attract bees. There’s something about color that reminds us we are alive. And color is very personal, and so culturally specific to the setting and context we’re in. You even see different preferences depending on the climate you live in; if you’re in a hot climate you might prefer different colors than if you’re in a cooler climate. 

Electronics used to just be black…then black and white…then the occasional gray. What are some of the things that opened this space up to more variety?

For a long time, tech for tech’s sake was enough, but I don’t think it’s enough anymore. There’s a reason when you go to a paint store there are literally hundreds of shades of white. We really believe that color, material and finish affect your wellbeing. 


A look at a few sources of color inspiration the designers use.

At Google, we’ve set out to create products that fit into people's lives, and you just plainly can't do that without color. When we create our palette for the different product categories, we really think about where a product is going to live. Is it in your pocket or next to your bag, or is it going to live on a shelf or on that beautiful wooden cabinet you got from your grandma? We think about how we can fit in or stand out in that environment.

What are some color and finish trends you’ve noticed in electronics? 

There’s been this transition away from designing furniture to hide technology, like those media cabinets people shoved electronics in. Our goal is to design things that people are happy to have out in the open, that fit beautifully next to whatever vase you have, or a pair of earbuds you choose the same way you choose a jacket or a bag.

What real-world inspiration goes into color selection?

We try to live with the objects and the colors we design. For instance, when we design something for the home, be it a new color or a new shape, we place it on a shelf. Then every day for a week we walk past it, and we start seeing things we didn’t previously see. We don't just design something, look at it once, and call it done. These days, we’re sending product models to our houses and living with them at home.


We also bring back objects from trips as inspiration. A toothbrush, a bar of soap, a little plate, a spoon—seriously, anything. Then in the studio, we have drawers for these things from all over the world organized by materials. We even have one that’s labeled “organic,” and that’s always fun to open because you never know what you’re going to find. Sometimes it’s stones but sometimes I’m like, What’s that smell? Then we use these objects to make physical mood boards. It’s this idea of turning off your logic brain and turning on your intuition side.

How do you make sure you don’t jump onto temporary color trends?

One thing we do is look at markets for longer-lasting products. It’s like furniture: It’s not like you buy a new couch every year, it’s maybe every five or 10 years. We can be inspired by fashion, but it's important to know that it can be a very quick cycle. It’s important we ask ourselves if something is a short term trend or a lasting movement. 

What was the process for choosing the Pixel Buds' colors?

We had this vision of this little dot floating in your ear. It’s almost like little candies, so we had bowls of candy in the studio for inspiration. 

Creating colors for something that goes on your body is so different from creating colors for something you hold in your hand or put on a shelf; it needs to coordinate with different hair styles, different skin tones and how people dress. We knew we could love a color when we looked at it, but what happens when it goes in the ear? We did a ton of prototyping and experimentation and then narrowed it down to around 100 colors, and then narrowed it down to 25. Then we tried them on a ton of people and photographed them, and we started to see some common themes of what worked in the ear and what just looked good on the table. 

For a while we had two dark neutrals and I thought, Wait a minute, that seems like a wasted opportunity. That’s how we brought back the green color, Quite Mint, which is my favorite and hadn't made the cut at first. 

I know there are different internal names for colors. What were some of the Pixel Buds’?

We called Quite Mint “pistachio,” which isn’t actually quite the right color, but we liked the name! And I think we just called Oh So Orange “sun orange.”

I think my favorite device color name is Purpleish for the Pixel 3a.

That’s my favorite name to this day because it felt so to the point! In some light, it’s purple, in some it’s sort of white, so it’s purple...ish. I loved it. 

Head to the Google Store to check out the Pixel Buds colors, which are available next month. (Not all colors are available in all areas.)

A digital jobs program to help America's economic recovery

July 13

Technology has been a lifeline to help many small businesses during the COVID-19 crisis. And online tools can help people get new skills and find good-paying jobs. Nearly two-thirds of all new jobs created since 2010 require either high-level or medium-level digital skills. This presents a challenge for many job seekers, as well as to America’s long-term economic security. People need good jobs, and the broader economy needs their energy and skills to support our future growth. 

College degrees are out of reach for many Americans, and you shouldn’t need a college diploma to have economic security. We need new, accessible job-training solutions—from enhanced vocational programs to online education—to help America recover and rebuild.

Our Grow with Google initiative helps people get the skills they need to get a job or grow their business. Today we’re announcing a new suite of Google Career Certificates that will help Americans get qualifications in high-paying, high-growth job fields—no college degree required. We will fund 100,000 need-based scholarships, and at Google we will consider our new career certificates the equivalent of a four-year degree for related roles. We’re also committing $10 million in job training grants for communities across America, working with partners like YWCA, NPower and JFF.

Here are more details on today’s announcements: 

  • Three new Google Career Certificates in the high-paying, high-growth career fields of Data Analytics, Project Management, and User Experience (UX) Design. Like our IT Support and Automation in Python Certificates, these new career programs are designed and taught by Google employees who work in these fields. The programs equip participants with the essential skills they need to get a job. No degree or prior experience is required to take the courses.

  • 100,000 need-based scholarships, funded by Google, to complete any of these career certificates. 

  • An expansion of our IT Certificate Employer Consortium, which currently includes over 50 employers like Walmart, Hulu, Sprint and of course Google.

  • Hundreds of apprenticeship opportunities at Google for people completing these career certificate programs to provide real on-the-job training.

  • The introduction of Google Career Certificates in Career and Technical Education high schools throughout America, starting with our IT Support Certificate this fall. These certificates build on our established partnership with more than 100 community colleges.

  • $10 million in grants to the YWCA, NPower and JFF to help workforce boards and nonprofits improve their job training programs and increase access to digital skills for women, veterans, and underserved Americans. As part of our Future of Work initiative, since 2017 Google.org has provided over $200 million in grants to nonprofits working to promote economic opportunity.

The new Google Career Certificates build on our existing programs to create pathways into IT Support careers for people without college degrees. Launched in 2018, the Google IT Certificate program has become the single most popular certificate on Coursera, and thousands of people have found new jobs and increased their earnings after completing the course. Take Yves Cooper, who enrolled in the program through our Grow with Google Partner, Merit America, while working as a van driver. Within five days of completing the program, he was offered a role as an IT helpdesk technician at a nonprofit in his hometown of Washington, D.C. We’re especially proud that the Google IT Certificate provides a pathway to jobs for groups that are underrepresented in the tech industry: 58 percent of IT Certificate learners identify as Black, Latino, female or veteran. 


Yves Cooper was offered a role as an IT helpdesk technician at a nonprofit after completing the Google IT Certificate program.

As America rebuilds our local communities, it’s important to start with the people that give them life. Since 2017, we’ve helped 5 million Americans learn digital skills through Grow with Google and we promise to do our part to help even more people prepare for jobs, creating economic opportunity for everyone.

Investing in India's digital future

July 13

Editor’s note: Today at Google for India, Google CEO Sundar Pichai announced a $10 billion Google for India Digitization Fund to help accelerate India’s digital economy. Below is an edited transcript of his remarks. Watch the full event here.

Google for India is always a really special time, and I only wish I could be there in person.

One of the challenges of this moment is that we can’t visit the people we love, or the places we call home. During this time, the ability to be online has been a lifeline to so many of us: whether you’re a small business trying to stay afloat…a parent looking for updates on how to keep your family healthy…or a student trying to keep up with your studies. Being helpful in these moments is at the core of Google’s mission to organize the world’s information and make it universally accessible and useful. 

A new generation of Indian innovation

This mission is deeply personal to me. Growing up, technology provided a window to a world outside my own. It also brought us closer together as a family. Every evening we were drawn to the television by Doordarshan’s special rendition of “Sare Jahan Se Accha.” I tried to explain this to my colleagues the other day, but I eventually gave up and just showed it to them on YouTube. 

When I was young, every new piece of technology brought new opportunities to learn and grow. But I always had to wait for it to arrive from someplace else. Today, people in India no longer have to wait for technology to come to them. A whole new generation of technologies is happening in India first.

It’s been incredible to see the rapid pace of change unfolding over my past few visits. From the excitement of young people using the latest apps and services, to the ways people are using smartphones to improve lives in rural villages, to the more than 2,500 Indian YouTube creators who each have over a million subscribers. I’ve actually turned to some of these creators for help over the last few weeks…for example, to learn how to make dishes like paneer makhani or pizza with my kids.

Indian small businesses go digital 

One of the most exciting success stories has been the digitization of small businesses. Just four years ago, only one-third of all small businesses in India had an online presence. Today, 26 million SMBs are now discoverable on Search and Maps, driving connections with more than 150 million users every month. What’s more, small merchants across the country are now equipped to accept digital payments. This has made it possible for more small businesses to become part of the formal economy, and it improves their access to credit.

The global pandemic has supercharged the adoption of digital tools. Digital payments, for example, have enabled families across India to access goods and services during lockdowns. For them, grocery delivery services have been invaluable—though I’m sure my grandmother misses haggling over the price of her vegetables in person.

When we build for India, we build for the world

Of course, all this progress was only possible because of a strong foundation of digital connectivity. Thanks to Prime Minister Modi’s vision for Digital India, the country has made huge progress in getting a billion Indians online. Low-cost smartphones combined with affordable data, and a world-class telecom infrastructure, have paved the way for new opportunities.

At Google, we’ve been proud to be a partner in India’s digitization journey since 2004, when we opened our first offices in Hyderabad and Bangalore. In those days, our focus was on getting helpful and relevant information to Indian users through Search, whether they were searching for their favorite Bollywood star or cricket scores.

From there, we spread awareness of the Internet in rural villages through programs like Internet Saathi. It’s helped more than 30 million women across India gain digital skills to improve their lives and their communities. 

Google’s efforts in India have deepened our understanding of how technology can be helpful to all different types of people. Building products for India first has helped us build better products for users everywhere. 

A recent example is GPay, our fast, simple way to pay contactlessly or online. Together with the rise in BHIM-UPI adoption, GPay makes it easy to pay the rickshawala, or send money to family back home. India is setting the global standard on how to digitize payments, and it’s now helping us build a global product.

Our AI-powered reading tutor app Bolo, now called Read Along, is another example of a technology built specifically for Indian users. Last year I visited with students in Mumbai who were using the app to learn to read on their own. It was amazing to see their excitement when they read a new word in Hindi for the first time. It received such a positive reception that we’re rolling it out to the rest of the world—now children in 180 countries can learn to read in nine languages, with more to come.

And our AI flood forecasting system was designed to keep people safe during India’s monsoon season. I’ll never forget how the 2015 floods in Chennai impacted so many families. I’m hopeful that this technology will bring greater peace of mind to the hundreds of millions of people affected by floods each year around the world.

Our next chapter of investment: Google for India Digitization Fund

These are just a few examples of how innovations that start in India can benefit the rest of the world. But India’s own digital journey is far from complete. There’s still more work to do in order to make the internet affordable and useful for a billion Indians…from improving voice input and computing for all of India’s languages, to inspiring and supporting a whole new generation of entrepreneurs.

Over the years, we’ve invested in many Indian businesses through Google, as well as through our growth equity investment fund CapitalG.

Today, I’m excited to announce the Google for India Digitization Fund. Through this effort, we will invest ₹75,000 crore, or approximately $10 billion, into India over the next 5-7 years. We’ll do this through a mix of equity investments, partnerships, and operational, infrastructure and ecosystem investments. This is a reflection of our confidence in the future of India and its digital economy.

Investments will focus on four areas important to India’s digitization: 

  • First, enabling affordable access and information for every Indian in their own language, whether it’s Hindi, Tamil, Punjabi or any other

  • Second, building new products and services that are deeply relevant to India’s unique needs

  • Third, empowering businesses as they continue or embark on their digital transformation

  • Fourth, leveraging technology and AI for social good, in areas like health, education, and agriculture

As we make these investments, we look forward to working alongside Prime Minister Modi and the Indian government, as well as Indian businesses of all sizes to realize our shared vision for a Digital India.

There’s no question we are facing a difficult moment today, in India and around the world. The dual challenges to our health and to our economies have forced us to rethink how we work and how we live. But times of challenge can lead to incredible moments of innovation. Our goal is to ensure India not only benefits from the next wave of innovation, but leads it. Working together we can ensure that our best days are still ahead.  

Tools for language access during COVID-19

July 10

Translation services make it easier to communicate with someone who doesn’t speak the same language, whether you’re traveling abroad or living in a new country. But in the context of a global pandemic, government and health officials urgently need to deliver vital information to their communities, and every member of the community needs access to information in a language they understand. In the U.S. alone, that means reaching 51 million migrants in at least 350 languages, with information ranging from how to keep people and their families safe, to financial, employment or food resources.

To better understand the challenges in addressing these translation needs, we conducted a research study, and interviewed health and government officials responsible for disseminating critical information. We assessed the current shortcomings in providing this information in the relevant languages, and how translation tools could help mitigate them.

The struggle for language access 

When organizations—from health departments to government agencies—update information on a website, it needs to be quickly accessible in a wide variety of languages. We learned that these organizations are struggling to keep up with the high volume of rapidly-changing content and lack the resources to translate this content into the needed languages. 

Officials, who are already spread thin, can barely keep up with the many updates surrounding COVID-19—from the evolving scientific understanding, to daily policy amendments, to new resources for the public. Nearly all new information is coming in as PDFs several times a day, and many officials report not being able to offer professional translation for all needed languages. This is where machine translation can serve as a useful tool.  

How machine translation can help

Machine translation is an automated way to translate text or speech from one language to another. It can take volumes of data and provide translations into a large number of supported languages. Although not intended to fully replace human translators, it can provide value when immediate translations are needed for a wide variety of languages.

If you're looking to translate content on the web, you have several options.

Use your browser

Many popular browsers offer translation capabilities, which are either built in (e.g. Chrome) or require installing an add-on or extension (e.g. Microsoft Edge or Firefox). To translate web content in Chrome, all you have to do is go to a webpage in another language, then click “Translate” at the top.

Use a website translation widget

If you are a webmaster of a government, non-profit or non-commercial website (e.g. an academic institution), you may be eligible to sign up for the Google Translate Website Translator widget. This tool translates web page content into more than 100 languages. To find out more, please visit the Webmasters blog.

Upload PDFs and documents

Google Translate supports translating many different document formats (.doc, .docx, .odf, .pdf, .ppt, .pptx, .ps, .rtf, .txt, .xls, .xlsx). By simply uploading the document, you can get a translated version in the language that you choose.
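Teams translating many files at once can pre-filter uploads against that list of supported formats. Here is a minimal sketch: the extension list comes from the paragraph above, while the file names and the helper function are hypothetical.

```python
from pathlib import Path

# Document formats Google Translate accepts for upload, per the list above.
SUPPORTED_EXTENSIONS = {
    ".doc", ".docx", ".odf", ".pdf", ".ppt", ".pptx",
    ".ps", ".rtf", ".txt", ".xls", ".xlsx",
}

def partition_for_upload(paths):
    """Split file paths into (uploadable, skipped) based on extension."""
    uploadable, skipped = [], []
    for p in map(Path, paths):
        (uploadable if p.suffix.lower() in SUPPORTED_EXTENSIONS else skipped).append(p)
    return uploadable, skipped

files = ["advisory.pdf", "flyer.PNG", "faq.docx"]
ok, skip = partition_for_upload(files)
print([p.name for p in ok])    # ['advisory.pdf', 'faq.docx']
print([p.name for p in skip])  # ['flyer.PNG']
```

A check like this lets an agency flag unsupported files (for example, scanned images) for conversion before sending the rest through the translation workflow.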

Millions of people need translations of resources at this time. Google’s researchers, designers and product developers are listening. We are continuously looking for ways to improve our products and come to people’s aid as we navigate the pandemic. 

New predictive capabilities in Google Analytics

July 10

Google Analytics helps you measure the actions people take across your app and website. By applying Google’s machine learning models, Analytics can analyze your data and predict future actions people may take. Today we are introducing two new predictive metrics to App + Web properties. The first is Purchase Probability, which predicts the likelihood that users who have visited your app or site will purchase in the next seven days. The second, Churn Probability, predicts how likely it is that recently active users will not visit your app or site in the next seven days. You can use these metrics to help drive growth for your business: with Google Ads, you can reach the people most likely to purchase and re-engage the people who might not return to your app or site.
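Analytics computes these scores with proprietary models, but the idea behind a purchase-probability metric can be sketched with a toy logistic model. Everything below (the feature names, weights and bias) is invented for illustration:

```python
import math

# Illustrative only: the features and weights below are invented;
# Analytics' actual predictive models are not public.
WEIGHTS = {"sessions": 0.35, "items_viewed": 0.12, "added_to_cart": 1.4}
BIAS = -3.0  # keeps the baseline probability low for barely active users

def purchase_probability(features):
    """Toy logistic score for 'likely to purchase in the next 7 days'."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

casual = {"sessions": 2, "items_viewed": 3, "added_to_cart": 0}
engaged = {"sessions": 8, "items_viewed": 20, "added_to_cart": 1}
assert purchase_probability(engaged) > purchase_probability(casual)
```

A churn-style metric works the same way in reverse: a model scores how likely a recently active user is to stay away for the next seven days, and audiences are built from users whose scores cross a threshold.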

Reach predictive audiences in Google Ads

Analytics will now suggest new predictive audiences that you can create in the Audience Builder. For example, using Purchase Probability, we will suggest the audience “Likely 7-day purchasers” which includes users who are most likely to purchase in the next seven days. Or using Churn Probability, we will suggest the audience “Likely 7-day churning users” which includes active users who are not likely to visit your site or app in the next seven days.


In the Audience Builder, you can select from a set of suggested predictive audiences.

In the past, if you wanted to reach people most likely to purchase, you’d probably build an audience of people who had added products to their shopping carts but didn’t purchase. However, with this approach you might miss reaching people who never selected an item but are likely to purchase in the future. Predictive audiences automatically determine which customer actions on your app or site might lead to a purchase—helping you find more people who are likely to convert at scale.

Imagine you run a home improvement store and are trying to drive more digital sales this month. Analytics will now suggest an audience that includes everyone who is likely to purchase in the next seven days—on either your app or your site—and then you can reach them with a personalized message using Google Ads.

Or let’s say you’re an online publisher and want to maintain your average number of daily users. You can build an audience of users who are likely to not visit your app or site in the next seven days and then create a Google Ads campaign to encourage them to read one of your popular articles.

Analyze customer activity with predictive metrics

In addition to building audiences, you can also use predictive metrics to analyze your data with the Analysis module. For example, you can use the User Lifetime technique to identify which marketing campaign helped you acquire users with the highest Purchase Probability. With that information you may decide to reallocate more of your marketing budget towards that high potential campaign.


View the Purchase Probability of users from various marketing campaigns.

You will soon be able to use predictive metrics in the App + Web properties beta to build audiences and help you determine how to optimize your marketing budget. In the coming weeks these metrics will become available in properties that have purchase events implemented or are automatically measuring in-app purchases once certain thresholds are met.

If you haven't yet created an App + Web property, you can get started here. We recommend continuing to use your existing Analytics properties alongside an App + Web property.

An update on our work on AI and responsible innovation

July 9

AI is a powerful tool that will have a significant impact on society for many years to come, from improving sustainability around the globe to advancing the accuracy of disease screenings. As a leader in AI, we’ve always prioritized the importance of understanding its societal implications and developing it in a way that gets it right for everyone. 

That’s why we first published our AI Principles two years ago and why we continue to provide regular updates on our work. As our CEO Sundar Pichai said in January, developing AI responsibly and with social benefit in mind can help avoid significant challenges and increase the potential to improve billions of lives. 

The world has changed a lot since January, and in many ways our Principles have become even more important to the work of our researchers and product teams. As we develop AI we are committed to testing safety, measuring social benefits, and building strong privacy protections into products. Our Principles give us a clear framework for the kinds of AI applications we will not design or deploy, like those that violate human rights or enable surveillance that violates international norms. For example, we were the first major company to have decided, several years ago, not to make general-purpose facial recognition commercially available.

Over the last 12 months, we’ve shared our point of view on how to develop AI responsibly—see our 2019 annual report and our recent submission to the European Commission’s Consultation on Artificial Intelligence. This year, we’ve also expanded our internal education programs, applied our principles to our tools and research, continued to refine our comprehensive review process, and engaged with external stakeholders around the world, while identifying emerging trends and patterns in AI. 

Building on previous AI Principles updates we shared here on the Keyword in 2018 and 2019, here’s our latest overview of what we’ve learned, and how we’re applying these learnings in practice.

Internal education

In addition to the initial Tech Ethics training, which more than 800 Googlers have taken since it launched last year, this year we developed a new training for AI Principles issue spotting. We piloted the course with more than 2,000 Googlers, and it is now available as an online self-study course to all Googlers across the company. The course coaches employees on asking critical questions to spot potential ethical issues, such as whether an AI application might lead to economic or educational exclusion, or cause physical, psychological, social or environmental harm. We recently released a version of this training as a mandatory course for customer-facing Cloud teams, and 5,000 Cloud employees have already taken it.

Tools and research

Our researchers are working on computer science and technology not just for today, but for tomorrow as well. They continue to play a leading role in the field, publishing more than 200 academic papers and articles in the last year on new methods for putting our principles into practice. These publications address technical approaches to fairness, safety, privacy, and accountability to people, including effective techniques for improving fairness in machine learning at scale, a method for incorporating ethical principles into a machine-learned model, and design principles for interpretable machine learning systems.

Over the last year, a team of Google researchers and collaborators published an academic paper proposing a framework called Model Cards that’s similar to a food nutrition label and designed to report an AI model’s intent of use, and its performance for people from a variety of backgrounds. We’ve applied this research by releasing Model Cards for Face Detection and Object Detection models used in Google Cloud’s Vision API product.
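The published Model Card framework defines the actual schema; as a rough sketch of the idea, a card can be represented as structured data that pairs intended use with performance reported per subgroup rather than only in aggregate. All names and numbers below are invented:

```python
# Hypothetical, minimal model-card-like record; the real Model Cards
# framework defines a much richer schema.
model_card = {
    "model": "face-detector-demo",
    "intended_use": "Detect face bounding boxes in consumer photos; "
                    "not for identification or surveillance.",
    "limitations": "Accuracy degrades on low-light and occluded faces.",
    # Performance is reported per subgroup, not just overall.
    "performance": {
        "overall": {"precision": 0.92, "recall": 0.88},
        "by_subgroup": {
            "low_light": {"precision": 0.84, "recall": 0.79},
            "bright_light": {"precision": 0.95, "recall": 0.91},
        },
    },
}

def flag_subgroup_gaps(card, metric="recall", max_gap=0.05):
    """List subgroups whose metric trails the overall number by more than max_gap."""
    overall = card["performance"]["overall"][metric]
    return [
        name for name, scores in card["performance"]["by_subgroup"].items()
        if overall - scores[metric] > max_gap
    ]

print(flag_subgroup_gaps(model_card))  # ['low_light']
```

A reviewer (or an automated check) can then flag models whose subgroup performance trails the aggregate, which is the kind of disparity a nutrition-label-style report is meant to surface.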

Our goal is for Google to be a helpful partner not only to researchers and developers who are building AI applications, but also to the billions of people who use them in everyday products. We’ve gone a step further, releasing 14 new tools that help explain how responsible AI works, from simple data visualizations on algorithmic bias for general audiences to Explainable AI dashboards and tool suites for enterprise users. You’ll find a number of these within our new Responsible AI with TensorFlow toolkit.

Review process 

As we’ve shared previously, Google has a central, dedicated team that reviews proposals for AI research and applications for alignment with our principles. Operationalizing the AI Principles is challenging work. Our review process is iterative, and we continue to refine and improve our assessments as advanced technologies emerge and evolve. The team also consults with internal domain experts in machine-learning fairness, security, privacy, human rights, and other areas. 

Whenever relevant, we conduct additional expert human rights assessments of new products in our review process, before launch. For example, we enlisted the nonprofit organization BSR (Business for Social Responsibility) to conduct a formal human rights assessment of the new Celebrity Recognition tool, offered within Google Cloud Vision and Video Intelligence products. BSR applied the UN’s Guiding Principles on Business and Human Rights as a framework to guide the product team to consider the product’s implications across people’s privacy and freedom of expression, as well as potential harms that could result, such as discrimination. This assessment informed not only the product’s design, but also the policies around its use. 

In addition, because any robust evaluation of AI needs to consider not just technical methods but also social context(s), we consult a wider spectrum of perspectives to inform our AI review process, including social scientists and Google’s employee resource groups.

As one example, consider how we’ve built upon learnings from a case we published in our last AI Principles update: the review of academic research on text-to-speech (TTS) technology. Since then, we have applied what we learned in that earlier review to establish a Google-wide approach to TTS. Google Cloud’s Text-to-Speech service, used in products such as Google Lens, puts this approach into practice.

Because TTS could be used across a variety of products, a group of senior Google technical and business leads were consulted. They considered the proposal against our AI Principles of being socially beneficial and accountable to people, as well as the need to incorporate privacy by design and avoiding technologies that cause or are likely to cause overall harm.

  • Reviewers identified the benefits of an improved user interface for various products, and significant accessibility benefits for people with hearing impairments. 

  • They considered the risks of voice mimicry and impersonation, media manipulation, and defamation.

  • They took into account how an AI model is used, and recognized the importance of adding layers of barriers for potential bad actors, to make harmful outcomes less likely.

  • They recommended on-device privacy and security precautions that serve as barriers to misuse, reducing the risk of overall harm from use of TTS technology for nefarious purposes.  

  • The reviewers recommended approving TTS technology for use in our products, but only with user consent and on-device privacy and security measures.

  • They did not approve open-sourcing of TTS models, due to the risk that someone might misuse them to build harmful deepfakes and distribute misinformation. 


External engagement

To increase the number and variety of outside perspectives, this year we launched the Equitable AI Research Roundtable, which brings together advocates for communities of people who are currently underrepresented in the technology industry, and who are most likely to be impacted by the consequences of AI and advanced technology. This group of community-based, non-profit leaders and academics meets with us quarterly to discuss AI ethics issues, and learnings from these discussions help shape operational efforts and decision-making frameworks. 

Our global efforts this year included new programs to support non-technical audiences in their understanding of, and participation in, the creation of responsible AI systems, whether they are policymakers, first-time ML (machine learning) practitioners or domain experts. These included:


  • Partnering with Yielding Accomplished African Women to implement the first-ever Women in Machine Learning Conference in Africa. We built a network of 1,250 female machine learning engineers from six different African countries. Using the Google Cloud Platform, we trained and certified 100 women at the conference in Accra, Ghana. More than 30 universities and 50 companies and organizations were represented. The conference schedule included workshops on Qwiklabs, AutoML, TensorFlow, a human-centered approach to AI, mindfulness and #IamRemarkable.

  • Releasing, in partnership with the Ministry of Public Health in Thailand, the first study of its kind on how nurses deployed a new AI system to screen patients for diabetic retinopathy, and how researchers applied nurses' and patients' input to make recommendations for future AI applications. 

  • Launching an ML workshop for policymakers featuring content and case studies covering the topics of Explainability, Fairness, Privacy, and Security. We’ve run this workshop, via Google Meet, with over 80 participants in the policy space, and more workshops are planned for the remainder of the year. 

  • Hosting the PAIR (People + AI Research) Symposium in London, which focused on participatory ML and marked PAIR’s expansion to the EMEA region. The event drew 160 attendees across academia, industry, engineering, and design, and featured cross-disciplinary discussions on human-centered AI and hands-on demos of ML Fairness and interpretability tools. 

We remain committed to external, cross-stakeholder collaboration. We continue to serve on the board and as a member of the Partnership on AI, a multi-stakeholder organization that studies and formulates best practices on AI technologies. As an example of our work together, the Partnership on AI is developing best practices that draw from our Model Cards proposal as a framework for accountability among its member organizations. 
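For readers unfamiliar with the Model Cards proposal mentioned above: a model card is a short, structured summary of a model's intended use, evaluation data and disaggregated performance. As a rough sketch only (the field names below are simplified stand-ins, not the official Model Cards schema), such a card can be kept as plain structured data and checked programmatically:

```python
# Minimal, illustrative model card as plain data. Field names and values
# are hypothetical; the real Model Cards proposal defines its own schema.
model_card = {
    "model_details": {"name": "toy-classifier", "version": "0.1"},
    "intended_use": "Demonstration only; not for production decisions.",
    "factors": ["lighting conditions", "demographic groups"],
    "metrics": {
        "accuracy_overall": 0.91,
        "accuracy_by_group": {"group_a": 0.93, "group_b": 0.88},
    },
    "ethical_considerations": "Evaluate per-group performance before deployment.",
}

def disaggregated_gap(card):
    """Spread between the best- and worst-performing groups in the card."""
    by_group = card["metrics"]["accuracy_by_group"]
    return max(by_group.values()) - min(by_group.values())

print(round(disaggregated_gap(model_card), 2))  # 0.05
```

Keeping the card machine-readable is what makes it usable as an accountability framework: a review process can flag any card whose per-group gap exceeds a threshold before launch.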

Trends, technologies and patterns emerging in AI

We know no system, whether human- or AI-powered, will ever be perfect, so we don’t consider the task of improving it to ever be finished. We continue to identify emerging trends and challenges that surface in our AI Principles reviews. These prompt us to ask questions such as when and how to responsibly develop synthetic media, keep humans in an appropriate loop of AI decisions, launch products with strong fairness metrics, deploy affective technologies, and offer explanations of how AI works within products themselves. 

As Sundar wrote in January, it’s crucial that companies like ours not only build promising new technologies, but also harness them for good—and make them available for everyone. This is why we believe regulation can offer helpful guidelines for AI innovation, and why we share our principled approach to applying AI. As we continue to responsibly develop and use AI to benefit people and society, we look forward to continuing to update you on specific actions we’re taking, and on our progress.

Here’s to you: 15 years of Google Earth stories

9 July

We’ve always said that if Google Maps is about finding your way, Google Earth is about getting lost. With Google Earth, you can see our planet like an astronaut from space, then travel anywhere on it in seconds with a click or tap. Even after an entire afternoon exploring cities, landscapes and stories on Google Earth, you'll have barely scratched the surface.

Now 15 years old, Google Earth is still the world’s biggest publicly accessible repository of geographic imagery. It combines aerial photography, satellite imagery, 3D topography, geographic data, and Street View into a tapestry you can explore. But Google Earth is much more than a 3D digital globe. The underlying technology has democratized mapmaking, allowing anyone to better understand our world and take action to create positive change.

Of the billions of people who have used Google Earth over the years, here are 15 stories that have inspired us:

1. Responding to natural disasters. Two months after Google Earth launched, we quickly realized that people were not just using it to plan their vacations. Hurricane Katrina hit the Gulf Coast in August 2005, and the Google Earth team quickly worked with the National Oceanic and Atmospheric Administration (NOAA) to make updated imagery available to first responders on the ground, supporting rescue and relief operations and helping them understand the hurricane’s impact.


Hurricane Katrina imagery in Google Earth helped support rescue and relief operations and helped responders understand the hurricane’s impact.

2. Taking virtual field trips. In 2006, former English teacher Jerome Burg first used Google Earth to create Lit Trips, tours that follow the journeys of literature’s well-known characters. Today the project includes more than 80 Lit Trips for teachers and students of all grade levels. Each tour includes thought-provoking discussion starters, classroom resources and enrichment activities.


This Lit Trip brought the classic young adult novel Walk Two Moons to life in Google Earth.

3. Protecting culture. When Chief Almir of the Suruí people first glimpsed Google Earth on a visit to an Internet cafe, the indigenous leader immediately grasped its potential as a tool for conserving his people’s traditions. In 2007, Chief Almir traveled thousands of miles from the Brazilian Amazon to Google headquarters to invite Google to train his community to use Google Earth. The Suruí people went on to build their Cultural Map on Google Earth, which included hundreds of sites of cultural significance in their rainforest.

The Surui Cultural Map shows the Surui tribe of the Amazon's vision of their forest, including their territory and traditional history.


4. Decoding animal behaviors. In 2008, German and Czech researchers used Google Earth to look at 8,510 domestic cattle in 308 pastures across six continents. The images led them to make the amazing discovery that certain species of cattle and deer align themselves to the magnetic poles while grazing or resting.


Scientists used Google Earth to find which species of cattle and deer align themselves to the magnetic poles.

5. Reuniting families. Saroo Brierley was accidentally separated from his family at the age of five and ended up in an orphanage. Luckily, Saroo was adopted by a loving family in Australia. As an adult, Saroo was curious about his origins and painstakingly traced his way back home to India using the satellite imagery in Google Earth. He was able to reunite with his biological mother in 2011 after 25 years apart. View the story in Google Earth.

Saroo Brierley found his childhood home after being lost for 25 years. Follow along as Saroo walks through just how he did it.


6. Helping communities impacted by war. The HALO Trust—the world's oldest, largest and most successful humanitarian landmine clearance agency—uses Google Earth to identify and map mined areas. The HALO Trust has cleared 1.8 million landmines, 11.9 million items of other explosive remnants of war and 57.2 million items of small arms munitions in 26 countries and territories around the world. 

Two HALO staff in Nagorno Karabakh studying minefields with Google Earth.

7. Protecting elephants from poachers. To protect elephants from poachers seeking their ivory tusks, Save the Elephants built an elephant tracking system. Starting in 2009, they have outfitted hundreds of elephants with satellite collars to track their movements in real time on Google Earth. Their partner organizations, including rangers at the Lewa Wildlife Conservancy, use Google Earth in the fight against elephant poachers across the conservancy and privately owned rangelands in Kenya.

Google and Save the Elephants partnered to raise awareness about African elephants.

8. Discovering unknown forests. Dr. Julian Bayliss used Google Earth to explore high-altitude rainforests in Africa. For almost as long as Google Earth has existed, Dr. Bayliss has been systematically flying over northern Mozambique in Google Earth and scanning the satellite imagery. One day he came across what appeared to be a mountaintop rainforest. His virtual discovery set off a chain of events that led to the discovery of an untouched rainforest ecosystem atop Mount Lico in 2018.


An untouched mountaintop rainforest ecosystem discovered with Google Earth.

9. Supporting students in rural classrooms. Padmaja Sathyamoorthy and others who work at the India Literacy Project (ILP) use Google Earth to build interactive content for rural classrooms, helping improve literacy for 745,000 students across India. Padmaja says, “ILP has made history and geography come alive with new tools and media content that capture the imagination of young minds. The project expands students’ horizons. It’s not just about learning curriculum from a textbook. I believe it creates a curiosity and a love for learning that will last a lifetime.”

The India Literacy Project uses Google Earth to build interactive content for rural classrooms.

10. Inspiring positive environmental change. The nonprofit organization HAkA used Google Earth to show threats to the Leuser Ecosystem, the last place on Earth where orangutans, rhinos, elephants and tigers coexist in the wild. This Google Earth tour helped raise awareness about the region and spurred positive changes in the area.


HAkA's Google Earth tours have helped raise awareness about ecosystem threats in Indonesia.

11. Falling more in love with our planet. Google Earth VR, which was released in 2016, gave people the chance to see the Earth from a new perspective. Whether they experienced the overview effect or toured far-flung locations, one thing remained constant: people couldn’t get enough.


You can soar over mountains with Google Earth VR.

12. Celebrating global language diversity. In 2019, Tania Haerekiterā Tapueluelu Wolfgramm, a Māori and Tongan woman, traveled across the Pacific Ocean to interview and record speakers of 10 different Indigenous languages for Google Earth. The project featured 50 Indigenous language speakers from around the world in honor of the 2019 International Year of Indigenous Languages.


Hear 50 Indigenous language speakers in Google Earth.

13. Catching (fictional) super thieves. People around the world followed the trail of Carmen Sandiego and the V.I.L.E. operatives by solving the three capers launched in Google Earth in 2019.


14. Telling more compelling news stories. Journalists have long used the rich imagery in Google Earth to create more engaging stories. Vox Video used Google Earth Studio to tell the story of how the Event Horizon telescope collected 54-million-year-old photons to take the first ever picture of a black hole.

See what it took to collect these 54-million-year-old photons from a supermassive black hole.

15. Homecoming during COVID-19. During Golden Week in Japan, most people visit their hometowns, but this year that wasn’t possible due to COVID-19. To help homesick natives, a group from Morioka city developed a tour in Google Earth that let people virtually take the bullet train to Morioka station and visit beloved locations in the city.


Travel to charming Morioka, Japan in Google Earth, whether you're a traveler or a native far from home.

A big thank you to everyone for being with us on this journey. Our hope is that Google Earth will continue to inspire curiosity and move us to care more deeply about our beautiful planet and all who live here. We look forward to seeing what the next 15 years bring!

Ask a Techspert: How do machine learning models explain themselves?

9 July

Editor’s Note: Do you ever feel like a fish out of water? Try being a tech novice and talking to an engineer at a place like Google. Ask a Techspert is a series on the Keyword asking Googler experts to explain complicated technology for the rest of us. This isn’t meant to be comprehensive, but just enough to make you sound smart at a dinner party. 

A few years ago, I learned that a translation from Finnish to English using Google Translate led to an unexpected outcome. The sentence “hän on lentäjä” became “he is a pilot” in English, even though “hän” is a gender-neutral word in Finnish. Why did Translate assume it was “he” as the default? 

As I started looking into it, I became aware that just like humans, machines are affected by society’s biases. The machine learning model for Translate relied on training data, which consisted of the input from hundreds of millions of already-translated examples from the web. “He” was more associated with some professions than “she” was, and vice versa. 
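A toy sketch can make this mechanism concrete. Assuming a model that simply picks whichever pronoun co-occurred most often with a profession in its training data (the corpus and its counts below are invented purely for illustration), any skew in that data flows straight through to the output:

```python
from collections import Counter

# Invented toy "training corpus": (profession, pronoun) pairs as they might
# appear in already-translated web text. The counts are illustrative only.
corpus = (
    [("pilot", "he")] * 90 + [("pilot", "she")] * 10 +
    [("nurse", "she")] * 85 + [("nurse", "he")] * 15
)

def most_frequent_pronoun(profession):
    """Pick whichever pronoun co-occurred most often with the profession."""
    counts = Counter(p for prof, p in corpus if prof == profession)
    return counts.most_common(1)[0][0]

# A gender-neutral source word gets resolved by raw frequency, not meaning:
print(most_frequent_pronoun("pilot"))  # he
print(most_frequent_pronoun("nurse"))  # she
```

Real translation models are far more sophisticated than a frequency lookup, but the underlying dynamic is the same: the model reproduces the associations present in its training examples.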

Now, Google provides options for both feminine and masculine translations when adapting gender-neutral words in several languages, and there’s a continued effort to roll it out more broadly. But it’s still a good example of how machine learning can reflect the biases we see all around us. Thankfully, there are teams at Google dedicated to finding human-centered solutions to making technology inclusive for everyone. I sat down with Been Kim, a Google researcher working on the People + AI Research (PAIR) team, who devotes her time to making sure artificial intelligence puts people, not machines, at its center, and helping others understand the full spectrum of human interaction with machine intelligence. We talked about how you make machine learning models easy to interpret and understand, and why it’s important for everybody to have a basic idea of how the technology works.


Why is this field of work so important?

Machine learning is such a powerful tool, and because of that, you want to make sure you’re using it responsibly. Let’s take an electric machine saw as an example. It’s a super powerful tool, but you need to learn how to use it in order not to cut your fingers. Once you learn, it’s so useful and efficient that you’ll never want to go back to using a hand saw. And the same goes for machine learning. We want to help you understand and use machine learning correctly, fairly and safely. 

Since machine learning is used in our everyday lives, it’s also important for everyone to understand how it impacts us. Whether you’re a coffee shop owner using machine learning to optimize bean purchases based on seasonal trends, or a patient whose doctor diagnoses a disease with the help of this technology, it’s often crucial to understand why a machine learning model has produced the outcome it has. To make that possible, developers and decision-makers need to be able to explain or present a machine learning model to people. This is what we call “interpretability.” 

How do you make machine learning models easier to understand and interpret? 

There are many different ways to make an ML model easier to understand. One way is to make the model reflect how humans think from the start, and have the model "trained" to provide explanations along with predictions, meaning when it gives you an outcome, it also has to explain how it got there. 

Another way is to try to explain a model after its training on data is done. This applies when the model has been built simply to map inputs to outputs, optimizing for prediction, without a clear “how” included. You can plug things into it and see what comes out, which gives you some insight into how the model generally makes decisions, but you don't necessarily know exactly how specific inputs are interpreted by the model in specific cases. 

One way to try to explain models after they’ve been trained is by using low-level features or high-level concepts. Let me give you an example of what this means. Imagine a system that classifies pictures: you give it a picture and it says, “This is a cat.” With low-level features, I ask the machine which pixels mattered for that prediction, and it can point to specific pixels; we might see that the pixels in question show the cat’s whiskers. But we might also see a scattering of pixels that don’t appear meaningful to the human eye, or discover that the model made the wrong interpretation. High-level concepts are more similar to the way humans communicate with one another. Instead of asking about pixels, I’d ask, “Did the whiskers matter for the prediction? Or the paws?” and again, the machine can show me what imagery led it to reach this conclusion. Based on the outcome, I can understand the model better. (Together with researchers from Stanford, we’ve published papers that go into further detail on this for those who are interested.)
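One common post-hoc, low-level attribution technique along these lines is occlusion: mask one region of the input at a time and measure how much the model's score drops. Here is a minimal sketch on a toy "image" with a stand-in classifier (this is not any real Google model or tool, just an illustration of the idea):

```python
import numpy as np

def toy_cat_score(image):
    """Stand-in classifier: responds only to the top-left 2x2 patch
    (pretend that's where the whiskers are)."""
    return float(image[:2, :2].sum())

def occlusion_saliency(image, score_fn, patch=2):
    """Score drop when each patch is zeroed out: bigger drop = more important."""
    base = score_fn(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            saliency[i // patch, j // patch] = base - score_fn(occluded)
    return saliency

image = np.ones((4, 4))
print(occlusion_saliency(image, toy_cat_score))
# Only the top-left patch gets a nonzero importance score.
```

The appeal of this kind of probe is that it treats the model as a black box: you only need to run it, not inspect its internals, which is exactly the post-training setting described above.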

Can machines understand some things that we humans can’t? 

Yes! This is an area that I am very interested in myself. I am currently working on a way to showcase how technology can help humans learn new things. Machine learning technology is better at some things than we are; for example it can analyze and interpret data at a much larger scale than humans can. Leveraging this technology, I believe we can enlighten human scientists with knowledge they haven't previously been aware of. 

What do you need to be careful of when you’re making conclusions based on machine learning models?

First of all, we have to be careful that human bias doesn't come into play. Humans carry biases that we simply cannot help and are often unaware of, so if an explanation is up to a human’s interpretation, and often it is, then we have a problem. Humans read what they want to read. Now, this doesn’t mean that you should remove humans from the loop. Humans communicate with machines, and vice versa. Machines need to communicate their outcomes in the form of a clear statement using quantitative data, not one that is vague and completely open to interpretation. If the latter happens, then the machine hasn’t done a very good job and the human isn’t able to provide good feedback to the machine. It could also be that the outcome simply lacks additional context only the human can provide, or that it could benefit from having caveats, so that people can make an informed judgment about the results of the model. 

What are some of the main challenges of this work? 

Well, one of the challenges for computer scientists in this field is dealing with non-mathematical objectives, which are things you might want to optimize for but don’t have an equation for. You can’t always define what is good for humans using math. That requires us to test and evaluate methods with rigor, and to have a table full of different people discuss the outcome. Another challenge has to do with complexity. Humans are so complex that we have a whole field of work, psychology, to study them. So in my work, we don't just face computational challenges; we also have to consider complex humans. Value-based questions such as “what defines fairness?” are even harder. They require interdisciplinary collaboration and a diverse group of people in the room to discuss each individual matter.

What's the most exciting part? 

I think interpretability research and methods are making a huge impact. Machine learning technology is a powerful tool that will transform society as we know it, and helping others to use it safely is very rewarding. 

On a more personal note, I come from South Korea and grew up in circumstances where I feel I didn’t have too many opportunities. I was incredibly lucky to get a scholarship to MIT and come to the U.S. When I think about people who haven't had these opportunities to be educated in science or machine learning, and I know that machine learning technology can really help them in their everyday lives if they use it safely, I feel really motivated to work on democratizing this technology. There are many ways to do that, and interpretability is one of the ways I can contribute.

Create 3D Immersive Ad Experiences on Display & Video 360

9 July

Last year we announced the beta launch of Swirl, an immersive 3D ad format that allows people to interact with an ad in a new way. Today, we are making Swirl available to all Display & Video 360 customers globally.

Swirl lets consumers engage with a product as if it's right in front of them, allowing them to rotate, zoom and expand the creative in the ad. Swirl ads allow brands to illustrate changes in behavior, new technology performance, unique product features and more.

Brands are seeing great success using Swirl ads for various campaign goals. This includes increasing consideration by showcasing product features, building brand awareness and delivering a great mobile experience. See how some brands utilized the power of Swirl over the past year.

Purina ONE attracts new customers using 3D Swirl ads

Purina wanted to show dog and cat owners how their Purina ONE pet food can make a difference in the health of their pets in just 28 days. To accomplish this, Purina partnered with creative agency Mediamonks to create a Swirl ad that demonstrates the behaviors of a healthy pet that eats Purina ONE food. The ad uniquely allows the user to “play fetch” with a 3D dog in the ad, so that consumers can visualize the same behavior in their pet. The use of this format led to a 6X increase in engagement (vs 2D ads) through its interactive experience.


Nissan showcases new automotive technologies with 3D ads

Nissan Spain was looking to increase user engagement with the creatives for their compact crossover SUV, the Qashqai. The creatives would highlight new driving features that were difficult to show in a static display ad. Nissan Spain and their agency Nissan United partnered with the creative agency QReal to create Swirl ads. They built the 3D creative to showcase new features like Lane Intervention, Intelligent Cruise Control and Lane Keep Assist. Nissan saw an engagement rate that was 8X higher than rich media benchmarks for the automotive vertical.


adidas Latin America highlights innovative product design

adidas LATAM was looking to capture user attention and showcase the innovative redesign of the adidas Ultra Boost 2019 shoe. adidas LATAM and creative agency EdgeDNA created Swirl ads that allow prospective buyers to get an up-close look without having to visit a store. Swirl ads drove a 4x higher engagement rate than rich media benchmarks and had an average viewable time of 11 seconds, which indicated that the ad captured user attention. The 3D creatives also drove a return on ad spend (ROAS) of ~2.8 for the Colombia market.


Belvedere drives brand engagement during holiday season with 3D ad

Belvedere Vodka, a premium spirit, wanted to drive awareness and visibility of their gift sets during the holiday season and increase consideration among consumers. Belvedere partnered with production agency Mediamonks to develop holiday themed creatives using Swirl ads. These 3D ads unveiled Belvedere’s gift set, including a bottle, gift bag, and martini glass. The ad enabled users to virtually view the gift set as if they were picking it out themselves in the store. The Swirl ads drove 6.5x higher brand favorability and 4.9x higher purchase intent vs. category norms, according to a brand equity measurement study by Kantar.


To get started creating a Swirl ad, you can upload pre-made 3D assets to Google Web Designer, our creative authoring tool, and use the new Swirl templates. Brands and agencies can also edit, configure, and publish high-quality, photorealistic models to use in immersive display ads using Google’s 3D platform, Poly.

For resources and tips on how to get started using Swirl, you can find more information here.

Cloud Covered: What was new in Google Cloud in June

8 July

Summer arrived here in the U.S. last month, so school's out—but people are still using technology to connect, play and learn. Here’s what was new.

Find time to play (not just work) at home.

Remote fun, not just remote work, is essential these days. Last month, we announced that our gaming solution for Google Maps Platform is now open to all mobile game developers to create immersive games. You can now quickly build mobile games with Google Maps Platform using easy-to-access developer tools to create games with real-world locations and gameplay. There are already some fun real-world games created that include hatching dinosaurs, birdwatching and more.

Discover more ways to Meet with new features.

Our premium video conferencing product Google Meet is still available for free to anyone with an email address. Last month, we announced new features as we keep improving Meet, including availability on the Nest Hub Max, layout improvements so you can see up to 16 participants and content being shared, and hardware updates. Other upcoming features include Hey Google voice control so you can start meetings without touching hardware, and replaceable backgrounds.

Learn new things at home, whether for grown-ups or kids.

New Meet features specifically for educators are slated to launch later this year. More than 140 million educators and students use G Suite for Education, and these new features are designed to improve capabilities for moderating meetings, and engagement in remote or hybrid learning environments. These new features include hand raising, attendance tracking, and many more.

Digital cloud learning continues all summer.

Our Google Cloud training and certifications team launched several new initiatives, including Google Cloud skill badges, new certification prep learning journeys, and remote certification exam availability. You can get the first month of the certification prep training at no cost, and 30 days of unlimited Qwiklabs access too. If you’re looking for more ways to learn this summer, check out our Next ‘20: OnAir lineup, starting July 14. New content arrives each week, with themed weeks on cloud topics from application modernization and data analytics to diversity and inclusion in technology.

How does your cloud grow? With new regions.

We virtually celebrated the launch of Google Cloud’s new physical Jakarta region last month. It’s the first Google Cloud region in Indonesia—one of the fastest-growing economies in the world—and the ninth in Asia Pacific. A new region means that those storing and using data there can access it faster than if it were stored farther away.

That’s all for June. Keep up to date on the Google Cloud blog any time.
