The making of “Pearl”

24 February

Spotlight Stories' “Pearl” follows a father and daughter as they travel the country in their beloved hatchback, chasing their dreams. Created and produced as an interactive VR experience, a 360 video, and a theatrical short film, “Pearl” premiered last summer at the Tribeca Film Festival, and is nominated this year for an Oscar for best animated short film.

With the Oscars just a few days away, we asked Director Patrick Osborne, Producer David Eisenmann, Music and Sound Creative Director Scot Stafford, and Technical Art Lead Cassidy Curtis to reflect on the journey of “Pearl.” You can watch “Pearl” on the YouTube app, on Daydream through the YouTube VR app, on the Google Spotlight Stories app for iOS and Android, or on HTC Vive.

Patrick Osborne, Director

My father is an artist and has worked as a toy designer. He loved to draw. He sacrificed a lot, as most parents do, in order to provide the best life for me and my brothers. One of those sacrifices was choosing family over career. “Pearl” was inspired by our relationship. Parents give us much more than material things—they give us taste, passion, their time. The time I spent drawing with my dad as a kid set up a foundation for the career I have today.


I think of “Pearl” as a folk-roadtrip-VR-musical. In 360 and VR, you’re creating a film without the constraints of borders, edges, a frame, or control over timing. That means the story is happening all around you, and the audience is free to look anywhere at any time. As a director, giving that control to the audience was a scary prospect.

I had to figure out how to tell a story that spanned decades without the typical editing cuts you experience in a traditional film, which make it easy to understand that time has passed. In order to tell this story the way I had envisioned it, I had to truncate time and transport the audience from scene to scene. I made the car the focal point of the story, used the car’s windows to frame and compose shots, and put the audience in the passenger seat.

David Eisenmann, Producer

Pearl is a single story made for several mediums at once: as a 2D theatrical film, a 360º interactive story, and fully immersive VR. All of these versions were built from the same core of story, animation, sound and music, yet to make the best possible version for each medium, we had to make different choices along the way. For example, the rhythm of editing from shot to shot was much quicker in 2D than in VR, with almost twice as many cuts between scenes. Working with Evil Eye Pictures, we used each medium’s strengths to help the others: to create the 2D version, Patrick actually “shot” the scenes in 360, using the mobile phone as a camera. Editor Stevan Riley assembled the film from this footage, much as he would do with one of his documentaries.

The result is a rare opportunity to see how one filmmaker tells the same story in all these different mediums. While the VR version feels like being there in the passenger seat with the characters, the theatrical version is more like watching their home movies. Different forms of intimacy, but they all bring you closer to these characters’ lives. 

Patrick Osborne, Director

As a fan of modern folk and Americana trends in music, I jumped at the chance to wrap the story in a song. "No Wrong Way Home" perfectly complements the visual style of the film, and the lyrics and imagery leave room for the audience to see and hear a little bit of themselves in our story.

Scot Stafford, Music and Sound Creative Director

Patrick wanted the story to evolve through music and for the song to be passed from father to daughter, along with the car. After an extensive search for songwriters, he chose Alexis Harte and JJ Wiesler for their sketch that contained the refrain, “there’s no wrong way home.”  It matched perfectly with Patrick’s vision and his early sketches.

JJ Wiesler, Scot Stafford, Karen Dufilho, and Patrick Osborne in the studio

With the lyrics of the song in hand, we brought in Nicki Bluhm and Kelley Stoltz to perform it. To make it sound as authentic as possible, they recorded wherever the father and daughter sang or spoke in the video—in the car, on the sidewalk, in a park, and on the radio. The song carries the story but the sound design carries the experience, so it was important to get these details right. Through sound effects and foley (reproduction of everyday sounds that add realistic background noises to a film), we tried to make the audience feel like they were really inside the car with these characters.

Kelley Stoltz recording in the car

Cassidy Curtis, Technical Art Lead

Patrick’s vision for “Pearl” was that every prop, environment, and change in lighting would play a role in the narrative. This “story everywhere you look” approach meant that we’d have to create a huge number of assets, so their design needed to be relatively simple. We also knew that simplicity would help us engage the audience: it’s easier to empathize with less detailed characters. Production designer Tuna Bora created paintings that established the film’s visual style: a kind of “polygon impressionism” made of simplified shapes, with rich, expressive color palettes to convey the changing moods and seasons.

My job was to equip our team of digital artists with the tools to animate in this style. To do it, we had to think like painters. In a typical film or game, artists build objects, texture them, and let the computer illuminate them with realistic lights, which indirectly produce a color image. For “Pearl,” we turned that process upside down: we worked directly on final colors, using custom palettes for each scene. Instead of digital lights, we hand-crafted appealing shadow shapes and soft gradations of color. We roughened the edges, added blooms of light, and a final dusting of film grain to tie these elements together.
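The "work directly on final colors" approach can be pictured, very loosely, as a lookup from light intensity into a hand-picked palette rather than a physically based lighting calculation. The palette values and thresholds below are invented for illustration; they are not taken from the actual production tools.

```python
# Illustrative sketch only: instead of letting realistic lights compute a
# color, map a light-intensity value straight to a hand-authored palette.
# Palette colors and thresholds are invented, not from the "Pearl" pipeline.

SUNSET_PALETTE = [
    (0.25, (58, 42, 75)),     # deep shadow: muted violet
    (0.60, (214, 96, 92)),    # mid tone: warm coral
    (1.01, (250, 214, 137)),  # lit surface: pale gold
]

def shade(intensity, palette=SUNSET_PALETTE):
    """Return the palette color for a light intensity in [0, 1]."""
    for threshold, color in palette:
        if intensity < threshold:
            return color
    return palette[-1][1]
```

Because each scene gets its own palette, swapping the list swaps the mood of the whole frame, which is one way to read the "rich, expressive color palettes to convey the changing moods and seasons" idea.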

“Pearl” is just five minutes and 39 seconds long, but it’s packed with 14 characters, 26 different scenes and 38 shots. And with multiple versions of the film, it took a tremendous amount of behind-the-scenes magic to bring “Pearl” to life. Watch it now from our Spotlight Stories channel on the YouTube app, on Daydream through the YouTube VR app, on the Google Spotlight Stories app for iOS and Android, or on HTC Vive.

Delivering RCS messaging to Android users worldwide

24 February

Whether we’re receiving a boarding pass for a flight or chatting with friends and family, SMS (better known as text messaging) is a universal way for us to stay connected. But despite its ubiquity, SMS hasn’t evolved to take advantage of all the features that smartphones enable today. We believe it’s important to innovate in messaging standards, so we’ve been working with the mobile industry on an initiative to upgrade SMS through a universal standard called RCS (Rich Communications Services), bringing more enhanced features to the standard messaging experience on mobile devices. Today, we’re taking a significant step toward making RCS messaging universally available to users across the world with 27 carriers and device manufacturers launching RCS to Android users with Google.

Following our partnerships with Sprint, Rogers, and Telenor, today we’re announcing that Orange, Deutsche Telekom, and Globe are committed to launching RCS messaging powered by the Jibe RCS cloud from Google and will be preloading Android Messages (formerly called Messenger for Android) as the standard native messaging app for their subscribers. We’re also announcing that the Vodafone Group RCS service is supporting Android Messages and has already launched across 10 markets for Vodafone subscribers globally.

These partners have also committed to interconnecting through the Jibe RCS hub so that RCS messages are delivered to subscribers across carrier networks, helping RCS messaging become truly universal. We’re now partnering with carriers representing more than 1B subscribers worldwide.

Upgrading the default messaging experience for Android

We want to make sure that Android users can access all the features that RCS messaging offers, like group chat, high-res photo sharing, read receipts, and more. So we’re working with mobile device manufacturers to make Android Messages the default messaging app for Android devices. Mobile device brands LG, Motorola, Sony, HTC, ZTE, Micromax, HMD Global - Home of Nokia Phones, Archos, BQ, Cherry Mobile, Condor, Fly, General Mobile, Lanix, LeEco, Lava, Kyocera, MyPhone, QMobile, Symphony and Wiko, along with Pixel and Android One devices, will preload Android Messages as the default messaging app on their devices. With these partners, we’re upgrading the messaging experience for Android users worldwide and ensuring a consistent and familiar experience for users. We’ll continue to add more partners over time.


Android Messages supports RCS, SMS and MMS so people can message all their friends regardless of their network or device type. We’ll continue to update and improve Android Messages to bring new features enabled through RCS, such as the ability to search and share all types of content and easily access the messages that are most important to you.

Improving business messaging with RCS

Currently millions of businesses, service providers, and brands use SMS to communicate with their customers, whether they’re sending a bank fraud alert or package delivery notification. But while SMS provides a universal way for consumers to connect with businesses, the messages are limited to just plain text. RCS will upgrade today’s business messaging experience by enabling brands to send more useful and interactive messages. For example, a message from your airline reminding you to check in for a flight can now take advantage of rich media and interactivity to provide a full check-in experience, complete with boarding pass, visual flight updates, and terminal maps on demand, all directly within the messaging experience. Businesses can also have a branded messaging experience with information about the business and the ability to share content like images, video clips and gifs.

To make it easier for brands to participate in RCS business messaging, we’re creating an Early Access Program which will allow businesses to learn and build with the technology, influence the roadmap and standards, and be first to offer their customers an upgraded messaging experience. Virgin Trains, Walgreens, Amber Alert Europe, Baskin-Robbins, BlaBlaCar, FICO, Gamestop, G2A.com, IHG, LexisNexis Risk Solutions, Naturas, Papa Murphy’s, Philips, Porto Seguro, Sky, SONIC® Drive-In, Subway, Tailored Brands (Men's Wearhouse and Jos. A. Banks), Time Inc., and Uber are participating in the program in collaboration with messaging partners 3C, CLX Communications, Experian Marketing Services, MessageBird, mGage A Vivial Company, Mobivity, Movile, Vonage through the Nexmo API Platform, OpenMarket, Waterfall, and Zipwhip. Demonstrations of RCS business messaging will be showcased at Mobile World Congress next week in the Android Global Village and GSMA Innovation City, and you can learn more about it on our site.


We’re excited to see RCS messaging reach more people, and look forward to launching with more partners in the coming months. Learn more about our RCS solutions and the Jibe cloud platform on our site.

Celebrating Penpan Sittitrai, Thailand’s master of fruit carving

24 February

Today's Doodle in Thailand celebrates national artist Penpan Sittitrai and the delicate art of fruit carving, which she mastered, skillfully turning every fruit and vegetable she touched into something truly exquisite.

Penpan Sittitrai doodle

The tradition of fruit carving has been around for centuries, initially carried out to decorate the tables of the Thai royal family. Over time, it has turned into a staple at most cultural events — something would be amiss at a Thai wedding without one of these as a centerpiece. But it’s at Songkran, the Thai New Year festival, when this custom is especially popular.

Penpan Sittitrai

Penpan carving a mango (Image source: the family's private photo collection)

Penpan Sittitrai is Thailand’s most famous fruit carving artist. Using nothing but a simple carving knife, she shaped watermelons into delicate leaves and mangoes into elegant swans. Nature was Sittitrai’s favorite theme, and from girlhood through her golden years, Sittitrai practiced her craft, elevating it to a form of fine art.

Penpan left behind many legacies, including her book “The Art of Thai Vegetable and Fruit Carving,” so anyone, anywhere can learn how to turn their apple-a-day into a work of art.

Improve your nonprofit’s account security with 2-step verification

23 February

While online accounts allow nonprofits to easily communicate with partners, volunteers and donors across the world, this shared network can also leave your account vulnerable to intruders. As your nonprofit continues to grow its online presence, it’s crucial to keep confidential information (e.g., finances or donors’ information) safe. While passwords have historically been the sole guardian for online account access, research from Google has shown that many passwords and security questions can easily be guessed. That's why we strongly recommend that all nonprofits using G Suite for Nonprofits, or Google products like Gmail, use 2-Step Verification (2SV) as an additional protection on their account(s).

Account hijacking, in which a hacker takes over an online account, poses a serious threat to your nonprofit’s operations. Typically, account hijackings are carried out through phishing attempts or by hackers guessing weak passwords. Because of this, it’s especially important for your nonprofit to maintain strong and unique account passwords to keep sensitive data safe.

But 2SV goes beyond just a strong password. It's an effective security feature that combines "something you know" (e.g., a password) and "something you have" (e.g., a text, a prompt, or a Security Key) to protect your accounts. Think of this like withdrawing money from an ATM/cash machine: You need both your PIN and your debit card.
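The "something you have" codes that authenticator apps generate follow open standards: HOTP (RFC 4226) and its time-based variant TOTP (RFC 6238). A minimal sketch of that math, leaving out details real apps handle such as base32 secret encoding and clock-drift windows:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based variant, the scheme authenticator apps use."""
    return hotp(secret, int(time.time()) // period, digits)
```

The server holds the same shared secret and accepts codes for the current (and usually adjacent) 30-second window, which is why a stolen password alone is not enough to sign in.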

Our free Google Authenticator app is available for Android and iOS devices, which generates a code for you each time you want to sign in to your account.

Now that you know what 2SV is, head over to our Help Page to start improving your nonprofit’s online security now. (Quick tip: Remember to keep your account settings up to date and configure backup options to use if your phone is ever lost or stolen). Stay safe, nonprofits!  

To see if your nonprofit is eligible to participate, review the Google for Nonprofits eligibility guidelines. Google for Nonprofits offers organizations like yours access to Google tools like Gmail, Google Calendar, Google Drive, Google Ad Grants, YouTube for Nonprofits and more at no charge. These tools can help you reach new donors and volunteers, work more efficiently, and tell your nonprofit’s story. Learn more and enroll here.

Doing more for racial justice

23 February

I'm the grandson of a Port of Seattle police officer, the nephew of a Washington State Trooper, and the son of a Snohomish County Detention Chief. The Black men in my family were all engaged in some form of law enforcement, and throughout my lifetime, I’ve seen law enforcement officers be a force for good in communities. But I’ve also borne witness to injustices that have shaken my faith in our criminal justice system. In my work at Google.org, I help identify causes and organizations that aim to ultimately help correct many of these injustices.

Since 2015, Google.org has committed more than $5 million to nonprofits advancing racial justice, and we’ve aimed to get proximate and better understand how racial bias can lead to exclusion from opportunity. Today we’re doubling our previous commitment, and investing $11.5 million in new grants to organizations across the country working to reform our criminal justice system.

Mass incarceration is a huge issue in the United States, and a major area of focus for our grants. The U.S. penal population has exploded, growing by 400 percent since 1984 to more than 2 million today, with Black men sentenced at over five times the rate of white men. We have the highest rate of incarceration in the world, dwarfing the rates of every developed country and even surpassing those in highly repressive regimes.

Videos of police shooting unarmed people of color have woken many of us up to the impact that racism and internalized bias have on black and brown communities. But we have almost no data on police behavior and criminal sentencing at a national level. Individual agencies and court systems keep track of some information, but aggregated reporting is nearly nonexistent and the data is often not complete enough to identify potential bias. Each agency collects and reports data in their own way, making comparisons between jurisdictions nearly impossible.

The average rate of police use of force for Black residents is 2.5 times as high as the overall rate and 3.6 times as high as the rate for White residents (Source: CPE’s report The Science of Justice)

We believe better data can be part of the solution, which is why we’re investing in organizations using data and evidence to reduce racial disparities in the criminal justice system. We’re giving $5 million to support the Center for Policing Equity (CPE), which partners with police agencies and communities by bringing together data science, trainings and policy reforms to address racial disparity. This intersection gives CPE a unique opportunity to both identify the cause of problems, and propose concrete solutions. CPE’s National Justice Database is the first in the nation to track national statistics on police behavior, including stops and use of force, and standardizes data collection across many of the country’s police departments. Soon, Google engineers will be volunteering their time and skills with CPE to help build and improve this platform.

We’re also supporting two organizations in California that are focused on ways that data can help bring about more equity in our court systems. Our $1.5 million grant to Measures for Justice aims to create a first-of-its-kind web platform that allows anyone to get a snapshot of how their local justice system treats people based on their offense history and across different categories of race/ethnicity, sex, indigent status and age. And $500,000 to the W. Haywood Burns Institute is helping to ensure this data across each of California’s 58 counties is accessible to criminal justice reform organizations so they can make data-informed decisions.

The goal of these efforts is a society where everyone, regardless of race, is ensured an equal outcome under the law. That’s why we’re also supporting Impact Justice with $1 million for their national Restorative Justice Project, an effort that aims to keep 1,900 youth, primarily youth of color, out of the juvenile justice system. And a $650K grant to JustLeadershipUSA will support their efforts to train a growing national network of formerly incarcerated leaders from across the country to lead reform efforts at the local, state and national level. We’ve also reinvested in organizations working to provide services to people who were formerly incarcerated and their communities like Defy Ventures, Center for Employment Opportunities, Silicon Valley De-Bug and Code for America.

We believe that these 10 organizations can create meaningful change around racial, social and criminal justice in the U.S., and we hope that our grants will provide resources and support to bring about this much-needed change. But there’s also a lot going on here at Google in the fight for justice and against bias. Our Black Googler Network (BGN) is a group of employees that fosters success in the Black community at Google and beyond. They lead mentorship programs and events, and have also driven social justice movements across the company, including solidarity for #BlackLivesMatter. Last November, the BGN held Google’s first ever “Blackout Week,” where they helped raise $70,000 for four organizations leading the Movement for Black Lives, and helped sponsor community screenings of the Netflix documentary “13th” in 12 Google offices across the country.

Google.org’s community screening of the documentary 13th at Google’s Mountain View HQ  

A person's race should not determine how they are treated by the law. We’re proud to support these organizations, and we hope that their focus on data and community-driven solutions will bring us closer to a more just society.

How Google Maps APIs are fighting HIV in Kenya

23 February

In 2015, the Joint United Nations Programme on HIV/AIDS (UNAIDS) and mobile analytics solutions provider iVEDiX came together to create the HIV Situation Room, a mobile app designed to help fight the HIV epidemic in Kenya. The app uses Google Maps APIs to create a comprehensive picture of HIV prevention efforts, testing and treatment — and make this programmatic data accessible both to local staff in clinics and others on the front lines, as well as to policy makers.

We sat down with Taavi Erkkola, senior advisor on monitoring and evaluation for UNAIDS, and Brian Annechino, director of government and public sector solutions for iVEDiX, to hear more about the project and why they chose Google Maps APIs to help them in the fight against HIV.

How did the idea for the UNAIDS HIV Situation Room app come about?

Taavi Erkkola: As of 2015, UNAIDS estimates a total of 36.7 million people living with HIV globally. Of those, 2.1 million are newly infected, with approximately 5,700 new HIV infections a day. Sixty-six percent of all people infected by HIV reside in sub-Saharan Africa, and roughly 400 of the people infected there each day are children under age 15. To effectively combat HIV, we need access to up-to-date information on everything from recent outbreaks and locations of clinics, to in-country educational efforts and inventory levels within healthcare facilities. UNAIDS has a Situation Room at our headquarters in Geneva that gives us access to this kind of worldwide HIV data. But we wanted to build a mobile app that provided global access to the Situation Room data, with more detail at a national, county and facility-level.

We tested out the app in Kenya because the country has a strong appetite for the use of technology to better its citizens’ health. Kenyan government agencies, including the National AIDS Council, encouraged organizations like Kenya Medical Supplies Authority (KEMSA) and the Ministry of Health to contribute their disease control expertise and data to the Situation Room solution. Kenya's President Uhuru Kenyatta was an early advocate, and has demonstrated his government’s commitment to making data-driven decisions, especially in the fight against HIV and AIDS.


Why did UNAIDS and iVEDiX choose Google Maps, and how did you use Google Maps APIs to build the HIV Situation Room app?

Brian Annechino: In Kenya, more than 80 percent of adults own a cell phone, and Android is by far the most popular operating system. Google Maps APIs are available across all platforms, including native APIs for Android, and Google Maps also offers the kind of fine-grained detail we needed — for example, the locations of more than 7,500 Kenyan healthcare facilities servicing the HIV and AIDS epidemic. Using data from multiple sources along with Google Maps, we can map things like a clinic’s risk of running out of antiretroviral medicine.
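A stock-out risk flag like the one described could be derived, in the simplest case, from units on hand and the average dispensing rate. The field names and the 14-day warning threshold below are assumptions for illustration, not the actual app's logic.

```python
# Hypothetical sketch of flagging a clinic's antiretroviral stock-out risk.
# Field names and the 14-day threshold are invented for illustration.

def days_of_stock(units_on_hand: int, avg_daily_dispensed: float) -> float:
    """Estimate how many days current inventory will last."""
    if avg_daily_dispensed <= 0:
        return float("inf")
    return units_on_hand / avg_daily_dispensed

def stockout_risk(clinic: dict, warn_days: float = 14.0) -> str:
    """Classify a clinic as high or low risk of running out."""
    days = days_of_stock(clinic["arv_units"], clinic["daily_dispensed"])
    return "high" if days < warn_days else "low"

clinics = [
    {"name": "Clinic A", "arv_units": 300, "daily_dispensed": 40.0},
    {"name": "Clinic B", "arv_units": 900, "daily_dispensed": 25.0},
]
flags = {c["name"]: stockout_risk(c) for c in clinics}
```

Plotting each clinic's flag as a colored marker on the map is then what turns a supply-chain table into something a county decision-maker can scan at a glance.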

Onix, a Google Premier Partner, identified the right Google Maps components to build the app and helped us procure the licensing we needed. We used the Google Maps Android API to build the main interface. Since it was important to have the most accurate and up-to-date map data for Kenya to support the effort, we used the Street View feature of the Google Maps Android API to let people zoom into the street level and see clinics that offer HIV services in locations where Street View imagery is available.

TE: These mapping capabilities are critical because we need to give our county-level users as much insight as possible on service delivery at health facilities. Decision-makers in HIV response are at national and county-level. In this app, we’re able to combine multiple data sources to get a more comprehensive picture of HIV prevention efforts, testing and treatment across these levels.

What kind of data does the HIV Situation Room app display?

TE: The app taps into three data sources. The first is the UNAIDS data set of country-by-country HIV estimates. The second is Kenya’s District Health Information System, which has detailed information from all 47 Kenyan counties — everything from the number of people treated at a specific hospital for HIV, to the number of HIV+ pregnant women attending clinics for visits, to the number of condoms distributed by each facility. The third data set will include community level data, which can also contain survey responses from clients about the quality of service they receive.


How does the HIV Situation Room use the data?

TE: By overlaying our inventory data and field notes on a map, we can see patterns and identify trends that help us respond quickly and plan efficiently. For example, if we see breakouts occurring in a particular area, we can monitor HIV test kits in that area or increase educational efforts for target communities.

Have you seen signs that your efforts are making a difference in Kenya?

TE: One of our biggest successes in Kenya is that the app is used by the highest-level decision-makers in the country — President Kenyatta uses the app — as well as people on the front lines fighting HIV, such as program managers. Using the app, policy makers have more information than ever before, and as a result, are able to devise more effective solutions by combining insights at the local and program coordination levels. We see it as an extremely powerful tool for fighting HIV — and we’re looking to bring this tool to other countries in Africa soon.

When computers learn to swear: Using machine learning for better online conversations

23 February

Imagine trying to have a conversation with your friends about the news you read this morning, but every time you said something, someone shouted in your face, called you a nasty name or accused you of some awful crime. You’d probably leave the conversation. Unfortunately, this happens all too frequently online as people try to discuss ideas on their favorite news sites but instead get bombarded with toxic comments.  

Seventy-two percent of American internet users have witnessed harassment online and nearly half have personally experienced it. Almost a third self-censor what they post online for fear of retribution. According to the same report, online harassment has affected the lives of roughly 140 million people in the U.S., and many more elsewhere.

This problem doesn’t just impact online readers. News organizations want to encourage engagement and discussion around their content, but find that sorting through millions of comments to find those that are trolling or abusive takes a lot of money, labor, and time. As a result, many sites have shut down comments altogether. But they tell us that isn’t the solution they want. We think technology can help.

Today, Google and Jigsaw are launching Perspective, an early-stage technology that uses machine learning to help identify toxic comments. Through an API, publishers—including members of the Digital News Initiative—and platforms can access this technology and use it for their sites.

How it works

Perspective reviews comments and scores them based on how similar they are to comments people said were “toxic” or likely to make someone leave a conversation. To learn how to spot potentially toxic language, Perspective examined hundreds of thousands of comments that had been labeled by human reviewers. Each time Perspective finds new examples of potentially toxic comments, or is provided with corrections from users, it can get better at scoring future comments.
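Since Perspective is exposed through an API, a publisher's integration boils down to sending comment text and reading back a score. The request and response shapes below follow the public `comments:analyze` endpoint, but treat them as illustrative; the score value is invented.

```python
# Sketch of building a Perspective analyze request and reading the score.
# Shapes follow the public comments:analyze endpoint; values are invented,
# and a real integration would POST this body with an API key.

def build_analyze_request(text: str) -> dict:
    """Request body asking Perspective to score a comment for toxicity."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response: dict) -> float:
    """Extract the 0-to-1 summary score from an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A response of the shape the API returns (score invented):
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
```

The score is a probability-like value between 0 and 1, not a verdict, which is why the paragraph below stresses that publishers decide what to do with it.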

Publishers can choose what they want to do with the information they get from Perspective. For example, a publisher could flag comments for its own moderators to review and decide whether to include them in a conversation. Or a publisher could provide tools to help their community understand the impact of what they are writing—by, for example, letting the commenter see the potential toxicity of their comment as they write it. Publishers could even just allow readers to sort comments by toxicity themselves, making it easier to find great discussions hidden under toxic ones.
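Once every comment carries a toxicity value, the publisher-side choices above reduce to simple operations: filter against a moderation threshold, or sort for readers. The comments, scores, and 0.8 cutoff below are invented for illustration.

```python
# Sketch of two publisher-side uses of per-comment toxicity scores:
# flag comments above a review threshold, and rank the rest for readers.
# All comments, scores, and the threshold are invented examples.

comments = [
    {"text": "Great reporting, thanks!", "toxicity": 0.03},
    {"text": "You people are idiots.", "toxicity": 0.91},
    {"text": "I disagree with the premise.", "toxicity": 0.12},
]

FLAG_THRESHOLD = 0.8  # assumed cutoff; each publisher would tune this

needs_review = [c for c in comments if c["toxicity"] >= FLAG_THRESHOLD]
ranked = sorted(comments, key=lambda c: c["toxicity"])  # calmest first
```

Sorting ascending surfaces the "great discussions hidden under toxic ones"; the flagged subset is what a human moderation team would actually read.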


We’ve been testing a version of this technology with The New York Times, where an entire team sifts through and moderates each comment before it’s posted—reviewing an average of 11,000 comments every day. That’s a lot of comments. As a result, the Times has comments on only about 10 percent of its articles. We’ve worked together to train models that allow Times moderators to sort through comments more quickly, and we’ll work with them to enable comments on more articles every day.

Where we go from here

Perspective joins the TensorFlow library and the Cloud Machine Learning Platform as one of many new machine learning resources Google has made available to developers. This technology is still developing. But that’s what’s so great about machine learning—even though the models are complex, they’ll improve over time. When Perspective is in the hands of publishers, it will be exposed to more comments and develop a better understanding of what makes certain comments toxic.

While we improve the technology, we’re also working to expand it. Our first model is designed to spot toxic language, but over the next year we’re keen to partner and deliver new models that work in languages other than English as well as models that can identify other perspectives, such as when comments are unsubstantial or off-topic.

In the long run, Perspective is about more than just improving comments. We hope we can help improve conversations online.

Gboard for iPhone gets an upgrade

23 February

In May 2016, you first met Gboard, our app that let you search and send information, GIFs, emoji and more, right from your keyboard. In July, Gboard went global. And today we’re upgrading your Gboard experience on iPhone by adding access to 15 additional languages, Google Doodles, new emoji, and—by popular demand—voice typing.

New languages and new emoji

Gboard will now work in Croatian, Czech, Danish, Dutch, Finnish, Greek, Polish, Romanian, Swedish, Catalan, Hungarian, Malay, Russian, Latin American Spanish and Turkish. To get typing, searching and sharing in these new languages, open the Gboard app and go to “Languages” > “Add Language.”

We’ve also increased support for the universal language—emoji. Now you can search and send all of the latest emoji from iOS 10.

Google Doodles

Doodles are one of the Googley-est things about Google. These fun animations honor holidays, anniversaries and notable people, and often teach you about a little slice of history. Now you can access them right from Gboard. On days when there’s a Doodle, you’ll see the “G” button animate, cuing you to quickly tap to open up the day’s Doodle and search for more information about it.

Say it faster

With today’s update, we’ve added voice typing, which allows you to dictate messages directly to Gboard. To tee up your next text, just long press the mic button on the space bar and talk.

To enjoy these updates to Gboard for iPhone, head to the App Store and make sure you’re running the latest version of the app. We’re always working on new features and languages, so please keep sharing your feedback in the app store—we’re listening!

What do productivity, machine learning and next-generation teams have in common? Google Cloud Next ’17.

22 February

On March 8-10, Google will host one of its largest events ever: Google Cloud Next 2017. Over the last year, the Google Cloud team has introduced new products and solutions to help businesses tackle some of their biggest productivity problems. Next is our way of bringing customers and partners together under one roof to see the results of all these updates, including the latest cloud innovations and more than 200 sessions where you can check out new products and features firsthand.

While I applaud anyone who figures out a way to attend all 200, there are a few sessions that you should definitely see if you want ideas to help boost your team’s productivity.

One that comes to mind is the Building stronger teams with team-based functionality session. Think about when you work on a project at home. Now think about how you work on a project at work. Do you find that your work’s success depends on a team of people rather than one person? Most would say yes. Yet, historically, productivity tools have focused on helping individuals get more done — like how you manage your inbox or tackle your to-do list. Since we rely on teams to successfully complete tasks, we need tools to help that group be more productive as a whole. It’s a new concept, and I’m excited that this session will share some of the early work that we’re doing to move beyond individual productivity to, instead, use technology to help entire teams achieve more.

Businesses hear all the time about how machine learning can have a positive impact, and many are interested to see how they can achieve that same impact for their companies. Fortunately, Google has always been at the forefront of machine learning technologies like computer vision, predictive modeling, natural language processing and speech recognition.

To that end, I recommend checking out Machine learning powering the workforce: Explore in Google Docs to see how machine learning in G Suite can instantly help you tackle everyday tasks and complex business challenges with the click of a button. Then, follow that up with Introduction to Google Cloud Machine Learning to learn how you can build your very own custom business applications on Google Cloud Platform (GCP).

Whether it's using the Sheets API to give project managers using Asana a way to do deeper comparisons of their projects, or using the Slides API to create a deck in Slides from a Trello board in just one click, the ways our customers and partners are automating their processes with G Suite APIs are impressive (and growing). The APIs we're building across G Suite, as part of the larger Cloud platform, are tailored to solve the most common business flows, and the Automating internal processes using Apps Script and APIs for Docs editors session shows how some teams are already using Apps Script to make their internal processes hum.
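To give a feel for the kind of automation described above, here is a minimal sketch of assembling a Slides API `batchUpdate` request body from a list of board cards. The card data and the `buildSlideRequests` helper are hypothetical; the `createSlide` and `insertText` request shapes follow the Slides API's `batchUpdate` format, and a real integration (for example from Apps Script or a client library) would send the resulting body to the Slides REST API:

```javascript
// Sketch: turn a list of board cards into Slides API batchUpdate requests.
// buildSlideRequests() and the card objects are hypothetical; only the
// request shapes (createSlide, insertText) mirror the real Slides API.
function buildSlideRequests(cards) {
  const requests = [];
  cards.forEach((card, i) => {
    const slideId = `card_slide_${i}`;
    // One new slide per card, using a predefined title+body layout.
    requests.push({
      createSlide: {
        objectId: slideId,
        slideLayoutReference: { predefinedLayout: "TITLE_AND_BODY" },
      },
    });
    // Insert the card's title text into the slide's title element.
    // (A real integration would look up the placeholder's objectId from
    // the created slide; a fixed id is used here purely for illustration.)
    requests.push({
      insertText: {
        objectId: `${slideId}_title`,
        insertionIndex: 0,
        text: card.name,
      },
    });
  });
  return requests;
}

// Example: two hypothetical cards from a project board.
const cards = [{ name: "Design review" }, { name: "Launch checklist" }];
const body = { requests: buildSlideRequests(cards) };
console.log(body.requests.length); // 4 requests: 2 createSlide + 2 insertText
```

The one-click feel comes from the fact that a single `batchUpdate` call applies the whole request list atomically, so the entire deck materializes in one round trip.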

These are the sessions that excite me, but you can find the sessions that excite you in the full Next '17 agenda. And if you’re wondering, you can still register. Grab your spot and I’ll see you there!

Google Cloud supports $3M in grant credits for the NSF BIGDATA program

22 February

Google Cloud Platform (GCP) serves more than one billion end-users, and we continue to seek ways to give researchers access to these powerful tools. Through the National Science Foundation’s BIGDATA grants program, we're offering researchers $3M in Google Cloud Platform credits to use the same infrastructure, analytics and machine learning that we use to drive innovation at Google.

About the BIGDATA grants

The National Science Foundation (NSF) recently announced its flagship research program on big data, Critical Techniques, Technologies and Methodologies for Advancing Foundations and Applications of Big Data Sciences and Engineering (BIGDATA). The BIGDATA program encourages experimentation with datasets at scale. Google will provide cloud credits to qualifying NSF-funded projects, giving researchers access to the breadth of services on GCP, from scalable data management (Google Cloud Storage, Google Cloud Bigtable, Google Cloud Datastore), to analysis (Google BigQuery, Google Cloud Dataflow, Google Cloud Dataproc, Google Cloud Datalab, Google Genomics) to machine learning (Google Cloud Machine Learning, TensorFlow).

This collaboration combines NSF’s experience in managing diverse research portfolios with Google’s proven track record in secure and intelligent cloud computing and data science. NSF is accepting proposals from March 15, 2017 through March 22, 2017. All proposals that meet NSF requirements will be reviewed through NSF’s merit review process.

GCP in action at Stanford University

To get an idea of the potential impact of GCP, consider Stanford University’s Center for Genomics and Personalized Medicine, where scientists work with data at massive scale. Director Mike Snyder and his lab have been involved in a number of large efforts, from ENCODE to the Million Veteran Program. Snyder and his colleagues turned to Google Genomics, which gives scientists access to GCP to help secure, store, process, explore and share biological datasets. With the cost of cloud computing dropping significantly and demand for ever-larger genomics studies growing, Snyder expects fewer labs to keep relying on local infrastructure.

“We’re entering an era where people are working with thousands or tens of thousands or even million genome projects, and you’re never going to do that on a local cluster very easily,” he says. “Cloud computing is where the field is going.”

“What you can do with Google Genomics — and you can’t do in-house — is run 1,000 genomes in parallel,” says Somalee Datta, bioinformatics director of Stanford University’s Center for Genomics and Personalized Medicine. “From our point of view, it’s almost infinite resources.”
