High quality metadata plays a significant role in improving enterprise search results. Enabling product search and discovery based on visual attributes makes both more intuitive and effective. However, it requires significant effort, as such attributes usually don’t exist in the generic text tags provided with products; they have to be assigned based on product images.
Using Computer Vision to automate this process is a logical choice, but it has its challenges. Before the ‘brain’ can be put to work, it has to be trained. The more training data provided, the better it performs; with too little data, results can be unpredictable. Creating training datasets is an expensive process, as it involves manual labour.
The purpose of this project was to find a way to reduce the need for massive training datasets. The AI model we developed understands human body parts, posture and orientation, so it needs much less training data and fewer product images to achieve accurate tagging.
The video below demonstrates how the AI model works out neckline and sleeve types and calculates a color vector for fashion items on the upper body.
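The case study doesn’t describe how the color vector is computed; one common approach is a normalised color histogram over the detected garment region. A minimal sketch under that assumption (the function name and bin count are illustrative, not the actual model’s representation):

```python
import numpy as np

def color_vector(image, bins=4):
    """Compute a normalised RGB color histogram as a flat vector.

    image: H x W x 3 uint8 array (e.g. a cropped garment region).
    Returns a vector of length bins**3 whose entries sum to 1,
    suitable for comparing items by color similarity.
    """
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    vec = hist.flatten()
    return vec / vec.sum()

# Example: a solid red patch puts all its mass in a single bin
patch = np.zeros((8, 8, 3), dtype=np.uint8)
patch[..., 0] = 255  # pure red
vec = color_vector(patch)
print(vec.max())  # 1.0 — every pixel falls into one histogram bin
```

Two such vectors can then be compared with cosine similarity or a histogram-intersection score to rank products by color.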
Skip to the relevant parts of the video using the links below.
This AI case study was conducted in Melbourne, Australia by FifthOcean and Nola, a foot traffic counter & visitor analytics solution for retailers, venues & events.