So let me show you the Translate API in action. One way to see it is to go to cloud.google.com/translate, scroll down the page just a little bit, and try out the API right there. So imagine that I'm getting ready for a trip to Japan, and I want to look up the word for "hotel" in Japanese. So I'm going to start in English here and just type in the word "Hotel". Then you can see very quickly, it shows me the Japanese translation.

But what about translating a group of sentences? Sure, we could invoke the API directly (you'll find a short sketch of that call after this walkthrough), but let's see how you can use the API right in Google Sheets. So I've created this example showing how to translate greetings into different languages, and you can do this very easily with a simple formula. All you need to do is type GOOGLETRANSLATE into a cell in Google Sheets, and you can specify what you want to translate. So I'm going to select this cell, then indicate what language the text is currently in, so we'll put "en" for English, and then what language you want to translate it to. Here, I'll type in "ko", which is the code for Korean. You can see, very quickly, it translates it. What makes this even better is that I can now copy this formula down into the cells below, and it automatically translates those other phrases into Korean. Let's do the same with Japanese. Again, all I need to do is type the GOOGLETRANSLATE formula, select the content I want to translate, and then specify the language the current text is in and the language I want to translate it to. Here, you can see I'm typing "ja" for Japanese. Again, I can easily copy that formula down and get several translations with just a few clicks. (The full formula is written out below.)

So you've seen a demo with text; what about images? Let me walk you through what the Vision API can do. Label detection: the API can detect broad sets of categories within an image, ranging from modes of transportation to animals. Face detection: the API can detect multiple faces within an image, along with key facial attributes like emotional state or whether the person is wearing headwear. OCR: the API can detect and extract text within an image, with support for a broad range of languages. Explicit content detection: it can detect explicit content, like adult or violent content, within an image. Landmark detection: the API can detect popular natural and man-made structures within an image. Logo detection: it can detect popular product logos within an image.

Here's an example of what the JSON response looks like for face detection. It's a picture that a Googler on our team took with two other teammates on a trip to Jordan. The API returns an object for each face found in the image, and you can see, as part of the face annotations, some pretty cool attributes like headwearLikelihood and joyLikelihood. (A sketch of this call appears below as well.)

The Vision API can also provide annotations on an image by looking up where, and in what context, this image or visually similar ones have appeared on the web. Take the image of this car, which may not be immediately recognizable to some of you; to others, and to the Cloud Vision API through web annotations, it's identified as the flying car from the Harry Potter series.

Take a minute and try Cloud Vision yourself directly in your browser. Navigate to cloud.google.com/vision and scroll down to try the API by uploading your own image.
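Here's the direct API call I mentioned, as a minimal sketch using the google-cloud-translate Python client. It assumes you've installed the library and set up application credentials; the word being translated is the "Hotel" example from the demo.

```python
from google.cloud import translate_v2 as translate

# Assumes the google-cloud-translate package is installed and
# GOOGLE_APPLICATION_CREDENTIALS points at a service account key.
client = translate.Client()

# Translate a single string from English to Japanese.
result = client.translate("Hotel", source_language="en", target_language="ja")
print(result["translatedText"])  # the Japanese translation
```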
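And here's the Sheets formula from the demo written out in full, assuming the English greeting sits in cell A2 (the cell reference is just for illustration):

```
=GOOGLETRANSLATE(A2, "en", "ko")
```

Copying that formula down a column translates each phrase in the rows below it, exactly as in the demo; swapping "ko" for "ja" gives you the Japanese version.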
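If you want to reproduce the face detection response described above, here's a minimal sketch using the google-cloud-vision Python client. The filename is a placeholder for your own photo; the likelihood fields correspond to the headwearLikelihood and joyLikelihood attributes in the JSON.

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "team_photo.jpg" is a placeholder for your own image file.
with open("team_photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:  # one annotation per face found
    print(face.joy_likelihood, face.headwear_likelihood)
```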
Another pre-built ML solution is the Translate API, which underlies the product shown here: simply place your camera over a sign, and it gets auto-translated for you. This is a combination of the Vision API, which does the optical character recognition, and the Translate API, which does the actual translation. The Translate API supports 90-plus languages, can detect the language of the source text, and is highly scalable. You can try this one on the web, too.

Here's an example of a GCP customer who uses the Cloud Natural Language API. Wootric is an ML-driven customer feedback platform that helps businesses improve their customer service. They collect millions of free-text customer survey responses each week and use the Natural Language API to automate the text processing and sentiment analysis. In this visual, you see the volume of feedback on the vertical axis and the sentiment on the horizontal axis. Lastly, the coloring of the circles indicates which bucket of feedback each response was automatically classified into, like usability feedback or pricing feedback. This allows Wootric and similar organizations to intelligently route and prioritize customer feedback in real time.

You can try out the pre-built model and get a sentiment score for your own free text using the link provided. Note that you'll get a sentiment score, which indicates how positive or negative the text is, as well as a magnitude, which indicates how intense the feeling is. (There's a short sketch of this call at the end of this section.)

Often, multiple pre-built models are used together in an ML system. For example, say you didn't have free-text comments from user reviews, but instead had audio from customer interactions with your call centers. If you wanted to get the sentiment of the customer's conversation, you could first transcribe the audio to text with the Cloud Speech API, then use the Natural Language API for sentiment, as you saw before.

One last pre-built model you can leverage is the Video Intelligence API, which is similar to the Cloud Vision API, except it works on video instead of images. Here, the API can identify labels within a video and when they occurred, along with a confidence level. For example, at one minute and 14 seconds into the video, the model has 96 percent confidence that the frame is a bird's eye view. One real customer use case for this API is film companies looking to target and recommend movies to an audience based on similar movie trailer watching behavior. These companies run all their movie trailers through the Video Intelligence API to label key features of each trailer, like rugged, or outer space, or wild west, and then can programmatically recommend similar trailers based on common themes.
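Here's the sentiment call I referenced, as a minimal sketch using the google-cloud-language Python client. The sample comment is made up for illustration; the score and magnitude fields are the two values described above.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

# A hypothetical free-text survey response.
document = language_v1.Document(
    content="Setup was painless, but the pricing page is confusing.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

sentiment = client.analyze_sentiment(document=document).document_sentiment
print(sentiment.score)      # how positive or negative, from -1.0 to 1.0
print(sentiment.magnitude)  # how intense the feeling is, 0.0 and up
```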
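For the call center scenario, chaining the two APIs might look roughly like this. A sketch, assuming a WAV recording already uploaded to Cloud Storage; the bucket path is hypothetical.

```python
from google.cloud import speech

speech_client = speech.SpeechClient()

# gs://my-bucket/support-call.wav is a hypothetical Cloud Storage path.
# For WAV files, the encoding and sample rate are read from the file header.
audio = speech.RecognitionAudio(uri="gs://my-bucket/support-call.wav")
config = speech.RecognitionConfig(language_code="en-US")

response = speech_client.recognize(config=config, audio=audio)
transcript = " ".join(
    result.alternatives[0].transcript for result in response.results
)
# The transcript can now be passed to the Natural Language API
# for sentiment analysis, as in the previous sketch.
```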
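Finally, here's a minimal sketch of the kind of Video Intelligence label detection call those film companies would run. The trailer URI is a placeholder, and video annotation runs asynchronously, so the client returns a long-running operation.

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# gs://my-bucket/trailer.mp4 is a placeholder for an uploaded trailer.
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/trailer.mp4",
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)
result = operation.result(timeout=300)  # wait for the async job to finish

for label in result.annotation_results[0].segment_label_annotations:
    # e.g. a label like "bird's eye view" with a confidence like 0.96
    print(label.entity.description, label.segments[0].confidence)
```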