Earlier in the course, you built custom models with BigQuery ML using columns of data as features and predicted the value of your labels. But what about unstructured data? How does ML learn the features? It's best illustrated with a quick example. What do you see here? It's a cat. But how do you know? Your eyes have the benefit of many years of evolution and intuition that allow you to perceive and interpret those pixels on the screen. How could we teach a machine to understand that this particular collection of pixels is a cat?

If you let yourself fall back into rule making, you might say: look for cat-like eyes in the images. What about this image? Your brain still knows it's likely a cat, but the machine now has no basis to go on with our old rule of "look at the eyes." Okay, what if we added a bunch more hard-coded rules, like look for eyes, ears, and a nose? So is this still a cat? What about this? Again, hard-coding rules completely fails us here.

That's where deep learning comes into play. As this Wired article states, "If you want to teach a neural network to recognize a cat, you simply show it thousands of photos of cats and let it figure it out." This is similar to how we teach children to recognize and classify new objects. In 2012, that's exactly what the Google Research team, with Jeff Dean and Andrew Ng, did. What you see here is what the deep learning neural network figured out a cat is, based on looking at over 10 million images and training the model on over 16,000 processors.

Now, a familiar architecture for deep learning is the neural network, a model inspired by our own human brain. Here, it takes the input image and classifies it as a cat or a dog. Again, we're not telling the model to focus on looking for dark colors or cat whiskers; it builds its own recipe for determining the correct label to apply at the end. As you can see from the image, modern ML models can scale and handle even tricky data points, like the dog hiding in a laundry basket.

Machine learning, specifically deep learning, is at the core of many Google products like Google Photos, which can classify and group photos, like photos of your pets, together in an album. Note that this is now multiple machine learning models in one. First, it needs to identify whether the photo you just took is of a dog. Second, it needs to identify whether this dog is your pet, based on comparing this photo to the history of photos you have with that specific dog.

Deep learning, remember, is a sub-discipline of machine learning, and it's incredibly useful when we as humans can't even map out our intuition about what makes a prediction correct or not. That's where we just show the computer 10,000 images of a cat and hope it figures it out. With recent leaps in ML research at Google, computer vision has come a very long way.
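To ground the structured-data recap at the top of this section, here is a minimal sketch of the BigQuery ML workflow, run through the Python client: columns are features, one column is the label. The dataset, table, and column names here are hypothetical examples, not ones from the course.

```python
# Train a BigQuery ML model on structured data: each SELECTed column is a
# feature, and input_label_cols names the label column we predict.
# Dataset, table, and column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

train_model_sql = """
CREATE OR REPLACE MODEL `my_dataset.purchase_model`
OPTIONS (
  model_type = 'logistic_reg',      -- predict a binary label
  input_label_cols = ['purchased']  -- the label column
) AS
SELECT
  pages_viewed,     -- feature
  session_seconds,  -- feature
  device_type,      -- feature
  purchased         -- label
FROM `my_dataset.visits`;
"""

# Kick off the training job and block until it completes.
client.query(train_model_sql).result()
print("Model trained: my_dataset.purchase_model")
```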
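For the neural network described above, the one that takes raw pixels and builds its own recipe for "cat" versus "dog", a minimal sketch in Keras looks like the following. The layer sizes, image dimensions, and the `pets/` directory of labeled images are assumptions for illustration, not the architecture from the 2012 experiment.

```python
# A small convolutional neural network that maps raw pixels to P(cat).
# No hand-coded rules about eyes, ears, or whiskers: the Conv2D layers
# learn their own features from the labeled examples.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),     # raw RGB pixels in
    layers.Rescaling(1.0 / 255),           # normalize pixel values
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"), # probability of "cat" vs "dog"
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# "Show it thousands of photos and let it figure it out": point it at a
# folder with one subdirectory per class (e.g. pets/cat/, pets/dog/).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "pets/", image_size=(128, 128), batch_size=32)
model.fit(train_ds, epochs=5)
```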
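And for the Google Photos example, the two-model idea from the paragraph above can be sketched as a pipeline: one model decides whether the photo contains a dog at all, and a second turns the photo into an embedding vector that is compared against embeddings of your past pet photos. This is a hypothetical illustration of that structure, not Google's actual implementation; `dog_classifier`, `embed`, and the thresholds are all assumed stand-ins.

```python
# Stage 1: classify (is this a dog photo?).
# Stage 2: compare an embedding of the photo against embeddings of the
# user's photo history to decide whether it's *their* dog.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_my_pet(photo, dog_classifier, embed, my_pet_embeddings,
              dog_threshold=0.5, match_threshold=0.8) -> bool:
    # Model 1: is this even a dog? (dog_classifier returns P(dog))
    if dog_classifier(photo) < dog_threshold:
        return False
    # Model 2: does it resemble this user's dog, based on photo history?
    query = embed(photo)
    best_match = max(cosine_similarity(query, past)
                     for past in my_pet_embeddings)
    return best_match >= match_threshold
```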