How to price AI bots for Enterprise IT Management?

The popular pricing models for Enterprise IT Management software are pay-per-managed-instance, pay-per-seat, or pay-per-GB-of-collected-metrics. The goal of AI-based automation is to improve the productivity of human Ops. Do the existing pricing models accurately reflect the value delivered by AI-based solutions to Enterprises? And are these pricing models feasible for the software vendor? This post explores pricing for AI-based Enterprise IT Management solutions.

The nirvana for Enterprises is to make IT costs completely on-demand, correlated directly with business revenue. The industry has been evolving towards this ideal with the growing adoption of SaaS and Cloud computing. Pay-as-you-go pricing is the de facto standard today for Enterprise IT operations in the cloud. Even for on-premise deployments, perpetual license models are being pushed to become more usage-oriented. The logical progression for pay-as-you-go models is an even finer-grained unit of consumption: AWS Lambda, Azure Functions, and GCP Cloud Functions are examples of a pay-per-execution model where pricing is based on the number of invocations of a function. Simply put, it is a serverless model where the customer does not pay for a running compute instance that is idle.
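To make the contrast concrete, here is a minimal sketch in Python comparing an always-on instance to a pay-per-execution model. All rates here are entirely hypothetical, chosen only to illustrate the shape of the two cost curves, not actual cloud pricing:

```python
# Hypothetical rates -- not actual cloud pricing.
INSTANCE_HOURLY_RATE = 0.10      # $/hour for an always-on compute instance
PER_INVOCATION_RATE = 0.0000002  # $/invocation for a serverless function


def monthly_cost_always_on(hours_per_month: float = 730) -> float:
    """Cost of an instance that is billed whether or not it is doing work."""
    return INSTANCE_HOURLY_RATE * hours_per_month


def monthly_cost_serverless(invocations_per_month: int) -> float:
    """Cost that scales directly with actual usage."""
    return PER_INVOCATION_RATE * invocations_per_month


# A lightly used workload: 1M invocations in a month.
print(monthly_cost_always_on())            # billed even while idle
print(monthly_cost_serverless(1_000_000))  # billed only for executions
```

For a lightly used workload, the always-on instance costs orders of magnitude more, which is exactly the idle-capacity waste the serverless model eliminates.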

In the context of AI-based automation, each of these models (pay-per-seat, pay-per-managed-instance, and pay-per-GB-of-collected-metrics) has limitations:

  • Pay-per-seat: This pricing model is tied to the number of human Ops using the software. By definition, it contradicts the value proposition: as productivity improves, the need for human Ops shrinks, which reduces the number of seats required. By adding intelligence for productivity improvements, software vendors will actually reduce their license revenue! This model is broken.
  • Pay-per-managed-instance: Traditionally, there was a direct correlation between the size of the cluster and the amount of human effort required to manage it. Given the changing landscape of IT automation, this correlation is becoming increasingly questionable. For instance, using Puppet/Chef for deployments, the effort to deploy 1000 instances versus a single instance is only incremental. While this pricing model is feasible for software vendors, my personal experience is that customers perceive themselves as over-paying under it.
  • Pay-per-GB-of-collected-metrics: This model is being used successfully for solutions that collect logs and metrics for analysis. Customers consider the pricing intuitive since it correlates with the cost of persisting the monitored data and the effort of analysis. While this is a good model for data analysis automation, it becomes less intuitive for complete AI-based automation that includes analysis, optimizing and planning across known solutions, and providing a recommendation. Planning and optimization techniques are significantly more expensive to develop, and software vendors will not be fully compensated if they rely only on the pay-per-GB model.
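The pay-per-seat contradiction above can be illustrated with a small sketch. All numbers are hypothetical, but they show how vendor revenue shrinks under this model precisely when the product delivers more value:

```python
# Hypothetical illustration of the pay-per-seat problem: as AI automation
# improves Ops productivity, fewer human seats are needed, so vendor
# revenue shrinks even though the value delivered grows.

SEAT_PRICE = 1_000  # $/seat/month (hypothetical)


def seats_needed(workload_tasks: int, tasks_per_op: int) -> int:
    """Human Ops seats needed for a workload, given per-person productivity."""
    return -(-workload_tasks // tasks_per_op)  # ceiling division


def monthly_seat_revenue(workload_tasks: int, tasks_per_op: int) -> int:
    return seats_needed(workload_tasks, tasks_per_op) * SEAT_PRICE


# Same workload, before and after the vendor ships better automation
# (productivity jumps from 50 to 200 tasks per Op per month).
before = monthly_seat_revenue(workload_tasks=1_000, tasks_per_op=50)
after = monthly_seat_revenue(workload_tasks=1_000, tasks_per_op=200)
print(before, after)  # revenue drops from 20 seats' worth to 5 seats' worth
```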

The right pricing model is one of the key elements of the startup discovery process: should it be cost-based, value-based, or competition-based? Today, most startups adopt competitive pricing models, since customers are already familiar with them and tend to question them less (one less adoption friction to overcome!). To conclude this post, we share our perspective on cost- and value-based pricing, drawn from our experience developing an AI-based service as well as our interactions with customers:

  • Cost-based pricing (i.e., what it takes for software vendors): AI-based systems typically build deep neural networks or similar models that require heavy compute resources. The same machine learning model can be applied to a customer cluster deployment of 10 nodes or 1000 nodes. The base cost of creating such models is significant, with the cluster size adding only a very incremental overhead. As such, pricing based on the number of instances aligns well with cost-based pricing only for larger customer deployments (the sweet-spot deployment size will vary).
  • Value-based pricing (i.e., what the customer gets): The key metric for the customer is Ops productivity. Ideally, if a specific task takes a human 2 hours, the AI bot can be charged on a per-operation basis, as some fraction of the human cost. But defining an intuitive benchmark for human time is non-trivial. Alternatively, the pricing can be structured as recruiting bots onto the Ops team. Like humans, bots can vary in proficiency from apprentice to expert. For instance, for a capacity planning task, an apprentice bot can analyze historical load patterns to provide statistical distributions, while an expert bot can not only provide the distributions but also optimize across all known alternatives and provide a recommendation. Software vendors can price the apprentice and expert bots differently (essentially a feature-based tiered model, but hopefully more intuitive).
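The per-operation, fraction-of-human-cost idea from the value-based perspective can be sketched as follows. The hourly rate, the tier fractions, and the tier names are all hypothetical placeholders, not a recommended price list:

```python
# Hypothetical value-based pricing: charge per bot operation, as a
# fraction of what the equivalent human effort would cost.

HUMAN_HOURLY_RATE = 75.0  # $/hour, hypothetical fully-loaded Ops cost

# Tiered bot proficiency, priced as different fractions of human cost.
BOT_VALUE_FRACTION = {
    "apprentice": 0.10,  # analysis only (e.g., statistical distributions)
    "expert": 0.25,      # analysis + optimization + recommendation
}


def price_per_operation(task_hours_for_human: float, tier: str) -> float:
    """Price one bot operation relative to the human time it replaces."""
    human_cost = HUMAN_HOURLY_RATE * task_hours_for_human
    return human_cost * BOT_VALUE_FRACTION[tier]


# A capacity-planning task that takes a human 2 hours:
print(price_per_operation(2.0, "apprentice"))  # cheaper, analysis-only tier
print(price_per_operation(2.0, "expert"))      # pricier, recommendation tier
```

The tier fractions make the apprentice/expert distinction explicit in the price while still anchoring both tiers to the human cost being displaced.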

Have other ideas? Put them in the comments! Let's see what sticks moving forward!

Democratize Data+AI — real-world battle scars to help w/ your journey. Product builder (Engg VP) & Data/ML leader (CDO). O’Reilly Author. AIForEveryone.org