Using ml5.js and its image classification model, I implemented this little experiment.
The sketch recognizes and labels images from your laptop's live camera input. At the same time, it sends queries with these labels to an API that responds with URLs of multiple images. You can clearly see how the background image changes according to the objects you put in front of the camera.
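A minimal sketch of how this pipeline could look with ml5.js and p5.js. The original post does not name the image API it queries, so the endpoint and the `buildQueryURL` helper below are hypothetical placeholders; the classifier calls follow ml5's MobileNet image classifier API.

```javascript
// Assumes ml5.js and p5.js are loaded in the page.
// The image-search endpoint is a HYPOTHETICAL placeholder —
// the original experiment's API is not named.

let classifier;     // ml5 image classifier (MobileNet)
let video;          // live webcam feed
let bgImage;        // background image fetched from the search API
let lastLabel = '';

// Build the request URL for the (hypothetical) image-search API.
function buildQueryURL(label) {
  return 'https://example.com/image-search?q=' + encodeURIComponent(label);
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  // Classify frames from the live camera with MobileNet.
  classifier = ml5.imageClassifier('MobileNet', video, () => classifyVideo());
}

function classifyVideo() {
  classifier.classify(gotResult);
}

function gotResult(error, results) {
  if (error) { console.error(error); classifyVideo(); return; }
  const label = results[0].label;
  if (label !== lastLabel) {
    lastLabel = label;
    // Query the image API with the new label; show the first returned URL.
    fetch(buildQueryURL(label))
      .then(res => res.json())
      .then(data => {
        if (data.urls && data.urls.length) {
          loadImage(data.urls[0], img => bgImage = img);
        }
      });
  }
  classifyVideo(); // keep classifying in a loop
}

function draw() {
  if (bgImage) image(bgImage, 0, 0, width, height);
  image(video, width - 160, height - 120, 160, 120); // small camera preview
  fill(255);
  text(lastLabel, 10, 20); // current label overlay
}
```

Because MobileNet fires many predictions per second, the sketch only re-queries the API when the top label actually changes, which keeps the background from flickering on every frame.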
Let’s imagine this feature implemented in a museum, where you are able to ask your own questions visually. You are looking for a painting with a certain pose, a particular object, or a particular feature, and you are able to request it with images.