The rise of commercially safe AI services and applications

One of the most interesting announcements to come out of CES was the launch of a new iStock “commercially safe” AI image generation service based on the Nvidia Picasso platform. It provides customers with a text-to-image generation tool trained on the company’s library of licensed, proprietary data to create ready-to-license visuals, with legal protection and usage rights for generated images included.

Perhaps OpenAI should take a close look at what it can learn from the launch of this new service. Leaving aside its legal dispute with the New York Times over the definition of fair use, OpenAI needs to recognize that the Wild West days of generative AI are on the wane.

If enterprise customers are going to deploy generative AI applications and services at scale, they will demand the same “commercially safe” legal protections and usage rights provided by iStock as a minimum requirement for any third-party vendors they engage. By the same token, owners of the data used to train major generative AI services will take increasingly stringent steps to protect their IP through legal or technical means.

Far from impeding innovation, this will spur the development of a new wave of compact, efficient small language models (SLMs) trained on clean datasets and customized to meet specific deployment and operational needs, ranging from preventing forklift-pedestrian accidents to automating hotel guest check-in.

The proliferation of on-device AI in smartphones and PCs will further accelerate this trend by reducing latency, enhancing data privacy and security, and increasing customization and personalization.

Whether it is an external image generation service or an internal safety and productivity application, the possibilities for “commercially safe” generative AI deployments are simply enormous.