Marketing
01 November 2022
Instagram launches Product Tagging API for Reels
Product tagging was made available to all US users in Feed earlier this year.

Photo by Alexander Shatov on Unsplash
Instagram is launching new developer tools that expand product tagging on the platform.
The Meta-owned company said Monday that it is bringing the Product Tagging API to Reels, its short-form video format.
“Enabling product tagging via the Instagram API reduces product tagging friction by meeting sellers where they are directly in their workflows,” the company wrote. “And, using product tagging in Reels allows brands to drive product discovery with engaging short-form video while responding to product trends and embedding their brands in culture.”
The feature is integrated with partners including Dash Hudson, Hootsuite, Later, Sprout Social, and Sprinklr. It can also be integrated by any content publishing API partner.
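For developers working with the Graph API directly rather than through one of those partners, the flow might look something like the sketch below. This is a minimal illustration based on Meta's two-step content publishing pattern (create a media container, then publish it); the account ID, access token, video URL and product ID are placeholders, and the exact fields, particularly product_tags, should be verified against Meta's current documentation.

```python
# Sketch: publishing a Reel with a product tag via the Instagram Graph API.
# Assumes an Instagram professional account connected to a product catalog.
import requests

GRAPH = "https://graph.facebook.com/v15.0"
IG_USER_ID = "17841400000000000"   # placeholder Instagram account ID
ACCESS_TOKEN = "EAAG..."           # placeholder access token

# Step 1: create a Reels media container that references a catalog product.
container = requests.post(
    f"{GRAPH}/{IG_USER_ID}/media",
    data={
        "media_type": "REELS",
        "video_url": "https://example.com/videos/fall-collection.mp4",
        "caption": "New arrivals for fall",
        # One entry per tagged catalog product (ID is hypothetical).
        "product_tags": '[{"product_id": "3231775260123456"}]',
        "access_token": ACCESS_TOKEN,
    },
).json()

# Step 2: publish the container. In practice you would poll the container's
# status and wait for video processing to finish before calling this.
published = requests.post(
    f"{GRAPH}/{IG_USER_ID}/media_publish",
    data={"creation_id": container["id"], "access_token": ACCESS_TOKEN},
).json()

print("Published media ID:", published.get("id"))
```

Partner platforms like the ones listed above presumably wrap this same publishing flow behind their own scheduling and workflow tools, which is what the company means by reducing friction for sellers.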
This comes after Instagram rolled out product tagging in Feed, which includes photos and carousel posts, to all users earlier this year. Shopping expanded to Reels in 2020.
Brands and other merchants can add product tags for specific items featured in posts. Clicking on a tag takes users to a product detail page within Instagram, and purchases can be made directly within the app or on the store's website.
Instagram has been emphasizing Reels as it seeks to grow short-form video and AI-powered discovery to compete with TikTok. Reels plays on Facebook and Instagram increased 50% from six months ago and were incremental to time spent on the app, Meta CEO Mark Zuckerberg said on the company’s recent Q3 earnings call.
At the same time, Instagram is reconfiguring its approach to shopping this year. As The Information reported last month, the app's team is testing removal of the Shop tab as it seeks a commerce model centered more on advertising and less on direct purchases from in-app Shops.
Monday's expansion is the latest reminder that product tagging remains part of the strategy despite those changes.
New tools from Adobe and Levi's generate product images in multiple variations.
An AI-generated model. (Photo courtesy of Levi's)
AI is at the top of the conversation across technology in 2023, as new models such as ChatGPT and GPT-4 show how training on large amounts of data can not only help businesses find insights in what already exists, but also create something new.
While plenty of off-hours time is being devoted to experimenting with these new models, the business uses of tools that bring together automation and creativity in ways that customers embrace are still coming into view.
In ecommerce, the promise of AI appears to be massive. Across marketplaces, advertising and customer service, brands and retailers have seen demands for content and customer touchpoints multiply. With executives constantly in search of efficiency, AI tools stand to help generate that content at scale.
To be sure, these are early days. AI has not yet become a permanent part of ecommerce workflows, and some of the tools that eventually get it there may not use the same models that are gaining so much press today. But teams inside brands and retailers are interested in experimenting with the technology, and pilots can offer hints at where it might be heading.
This week delivered a pair of launches from Levi’s and Adobe that show how new tools could change the way product images are generated. Let's take a look:
Photoshoots featuring models wearing products in real-world settings have long been a staple of the marketing playbook in fashion. Levi’s is piloting a new approach that could bring AI into the equation.
Through a partnership with Amsterdam-based Lalaland.ai, Levi Strauss is planning to test the use of customized, AI-generated models. Designed to supplement human models, Lalaland.ai’s technology is built to help show products on a diverse range of body types, ages, sizes and skin tones.
Levi’s positioned this as a supplement to human models, not a replacement. Typically, a product page on the Levi’s app or website features only one model. Using the technology to create multiple images gives customers a way to see themselves represented in the products shown. It can also help increase diversity, equity and inclusion within Levi's ecommerce stores, the company said.
“While AI will likely never fully replace human models for us, we are excited for the potential capabilities this may afford us for the consumer experience,” said Dr. Amy Gershkoff Bolles, global head of digital and emerging technology strategy at Levi Strauss & Co., in a statement. “We see fashion and technology as both an art and a science, and we’re thrilled to be partnering with Lalaland.ai, a company with such high-quality technology that can help us continue on our journey for a more diverse and inclusive customer experience.”
A new tool from Adobe also aims to automate the work of producing product images in multiple variations for ecommerce stores, and this one goes beyond fashion.
According to Reuters, the new tool is designed to let ecommerce brands and retailers create 3D images that show products from a range of categories, including home goods, toys, furniture and apparel, in a variety of formats, configurations and settings. Product images are used across a range of content, from product pages to emails to social campaigns, so the content needs for brands and retailers are voluminous. From Reuters:
But even keeping up with making renderings has created a tremendous amount of work for e-commerce companies as marketing campaigns have become more tightly targeted, said Francois Cottin, senior director of marketing for Adobe's Substance 3D business.
For example, Cottin said, a company selling a coffee machine might want to show the gadget against a different background in different countries, because German kitchens might look different from California kitchens. Most companies have to tap 3D artists to create each image.
This advance is as much about transforming the work of teams as it is about creating the product image variations themselves. 3D models are already used by many of the large ecommerce players, but creating them remains the work of large teams with specialty skills in visual effects. The images then pass to marketers and merchandisers, who find a home for them on product pages within the customer experience.
Automation can help make all of this work more efficient. Such a tool could also have a big impact on smaller brands and retailers: if these capabilities move into wider release, a pool of content that would have been available only to the best-resourced companies could be opened up for everyone to use.
While Adobe typically works with enterprises and this product is likely designed for that market segment, its release presents a question worth asking for the future: Will AI be the next ecommerce advance that further levels the playing field between storied brands and fast-rising startups?