Pricing Science FAQs with Zilliant VP of Science & Analytics Lee Rehwinkel
By Lee Rehwinkel
Apr 24, 2020
Table of Contents
- Pricing Data Science FAQ No. 1: How do I know if the solution is “black box”?
- Pricing Data Science FAQ No. 2: How much data should a pricing solution be able to consume?
- Pricing Data Science FAQ No. 3: What is the most important characteristic of a machine learning solution?
- Pricing Data Science FAQ No. 4: What goes into a price elasticity calculation?
- Pricing Data Science FAQ No. 5: Is it true that pricing models can’t use loss data?
Pricing data science, price optimization, dynamic pricing, price elasticity: if you’re familiar with the world of B2B pricing, you’ve likely heard all these terms. If seeing is believing, the tricky part is parsing fact from fiction when it comes to pricing science.
To help you spot the real thing, we recently sat down with Lee Rehwinkel, Vice President of Science and Analytics at Zilliant. Lee has been with Zilliant for nine years and is frequently on the road, and lately on Zoom or Webex, speaking with customers and business leaders to answer their B2B pricing science questions. Below are the questions he runs into most frequently.
Pricing Data Science FAQ No. 1: How do I know if the solution is “black box”?
In the past, the perception of price optimization as a black box was quite prevalent. However, there has been a broad market shift toward giving users complete transparency. Even though the core science behind the scenes hasn’t changed, our customers have requested increased access and visibility. As a customer, you can see precisely how the pricing model is configured, get your hands on the model and even configure it yourself. For example, our Segmentation Management feature allows our customers to see how segmentation attributes are derived, including the formulas and filters behind the scenes. Modern pricing science should provide a high level of functionality in an understandable way and give the user the ability to pull the right levers to adjust the model as needed.
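To make the idea of a transparent, inspectable segmentation attribute concrete, here is a minimal sketch. It is not Zilliant’s actual configuration format; every name and field below is hypothetical. The point is simply that the derivation formula and filters live in declared data a business user can audit, rather than being buried inside model code.

```python
# Hypothetical sketch of a transparent segmentation attribute: the formula
# and filters are declared as inspectable data, not hidden in model code.
segmentation_attribute = {
    "name": "customer_size_tier",
    "source": "order_lines",
    "filters": [
        {"field": "order_date", "op": ">=", "value": "2019-01-01"},
        {"field": "line_status", "op": "==", "value": "shipped"},
    ],
    "formula": "sum(revenue) grouped by customer_id",
}

def describe(attr: dict) -> str:
    """Render the attribute definition so a pricing user can audit it."""
    filters = " AND ".join(f"{f['field']} {f['op']} {f['value']}" for f in attr["filters"])
    return f"{attr['name']}: {attr['formula']} where {filters}"

print(describe(segmentation_attribute))
```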
Pricing Data Science FAQ No. 2: How much data should a pricing solution be able to consume?
In short? Enormous amounts. We’ve made significant investments in technologies like Amazon Redshift, which make it possible to bring in incredible amounts of data and handle it quickly. We continue to see substantial increases in each of the 3 V’s of big data: volume, velocity and variety. Today, we incorporate far more than just order line and product master data. Very often we are taking advantage of scraped competitive data, commodity indices, eCommerce data, inventory data … any data you need to make the best, smartest pricing decisions possible. Our strategic investments in how we consume, ingest and store data within our solution have driven a 20x improvement in run performance: data that previously took 20 hours to consume, prepare and process now takes under an hour. Even order line datasets that exceed a billion records pose no issues.
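As a rough illustration of working at that scale, here is a minimal sketch of streaming a very large order-line table in chunks rather than loading it into memory at once. It assumes a Redshift cluster reachable over the standard PostgreSQL wire protocol; the connection string, table and column names are all hypothetical, and this is not Zilliant’s pipeline.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical connection string; Redshift speaks the PostgreSQL wire
# protocol, so a standard psycopg2-backed engine works.
engine = create_engine("postgresql+psycopg2://user:password@my-cluster:5439/pricing")

total_rows = 0
# Stream the (potentially billion-row) table in chunks instead of one load.
for chunk in pd.read_sql(
    "SELECT customer_id, product_id, price, quantity FROM order_lines",
    engine,
    chunksize=1_000_000,
):
    total_rows += len(chunk)  # stand-in for real aggregation / feature prep

print(f"processed {total_rows:,} order lines")
```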
Pricing Data Science FAQ No. 3: What is the most important characteristic of a machine learning solution?
Any machine learning solution needs to be highly flexible. New machine learning frameworks and libraries are released all the time, and it’s important to be able to take advantage of them quickly. Our science modeling philosophy is to include as much no-code configuration as possible, with the ability to custom code models when needed. We rely heavily on a flexible no-code configuration framework, which enables us to maintain our models very effectively and significantly reduces time to value. But we recognize that we often need the ability to custom build models to solve adjacent challenges. As in-house data science teams become more prominent, they’ll need the flexibility of both no-code and custom coded models to get their ideas embedded into a production business process.
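One common way to get both paths in the same framework is a small registry: configuration alone selects and parameterizes a built-in model, while a custom factory acts as the escape hatch for bespoke science. The sketch below is illustrative only (not Zilliant’s framework), using scikit-learn estimators as stand-ins.

```python
from sklearn.linear_model import Ridge
from sklearn.ensemble import GradientBoostingRegressor

# Built-in models selectable through configuration alone.
NO_CODE_MODELS = {
    "ridge": Ridge,
    "gradient_boosting": GradientBoostingRegressor,
}

def build_model(config: dict, custom_factory=None):
    """Build a model from no-code config, or fall back to custom code."""
    if custom_factory is not None:          # escape hatch for bespoke models
        return custom_factory(**config.get("params", {}))
    cls = NO_CODE_MODELS[config["model"]]   # pure configuration path
    return cls(**config.get("params", {}))

# No-code path: the whole model is declared as data.
model = build_model({"model": "ridge", "params": {"alpha": 0.5}})

# Custom-code path: a data scientist supplies their own factory.
custom = build_model({"params": {"n_estimators": 200}},
                     custom_factory=GradientBoostingRegressor)
```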
Pricing Data Science FAQ No. 4: What goes into a price elasticity calculation?
The go-to price elasticity calculation is based on price and volume correlations over time. It’s a core component of pricing science at Zilliant, where we take noisy B2B data and extract a signal that can be acted upon. However, there are other drivers that can be used for calculating elasticity. By adding in competitor data, market share data or quote conversion data, we can gain additional sensitivity insights. Ten years ago, CPQ systems were just hitting the market; today they have become an excellent source of data for our pricing models.
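To make the price-volume approach concrete, here is a minimal sketch of the textbook calculation: fit a constant-elasticity demand model Q = a·P^e to historical price and volume pairs, so the elasticity e is the slope of a log-log regression. The data below is made up for illustration, and this is a simplification rather than a recreation of Zilliant’s models.

```python
import numpy as np

# Synthetic price/volume history for one product segment (illustrative only).
prices = np.array([10.0, 10.5, 11.0, 9.5, 12.0, 10.0, 11.5, 9.0])
volumes = np.array([980, 900, 860, 1100, 760, 1000, 800, 1200])

# Q = a * P^e  implies  ln Q = ln a + e * ln P,
# so the elasticity e is the slope of a log-log fit.
elasticity, intercept = np.polyfit(np.log(prices), np.log(volumes), deg=1)
print(f"estimated price elasticity: {elasticity:.2f}")  # negative: volume falls as price rises
```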
Pricing Data Science FAQ No. 5: Is it true that pricing models can’t use loss data?
This is a common myth! As with the point above, ten years ago CPQ data was for the most part unusable. With the maturation of the CPQ market, which links quotes to orders to conversions, we now have the full chain of information. In the past, this data was siloed across internal systems; now those systems are fully connected. When you add your pricing solution into that stack, you create a closed-loop system for collecting and acting on data. This has helped enable techniques like A/B price testing, which ultimately produce optimized prices you can have higher confidence in.
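Loss data is valuable precisely because it shows the prices buyers walked away from. As one illustrative sketch (synthetic data, and a simplification of any production model), a logistic regression on quote outcomes estimates how win probability falls as the quoted price rises:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic quote history: price offered, and whether the quote converted
# to an order (1 = win, 0 = loss). Illustrative data only.
quote_prices = np.array([[9.0], [9.5], [10.0], [10.5], [11.0], [11.5], [12.0], [12.5]])
won = np.array([1, 1, 1, 1, 0, 1, 0, 0])

# The losses are what let us estimate how win probability decays with price.
model = LogisticRegression().fit(quote_prices, won)

for p in (9.5, 11.0, 12.5):
    prob = model.predict_proba([[p]])[0, 1]
    print(f"price {p:5.2f} -> estimated win probability {prob:.0%}")
```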
As Lee said, “It’s easy to draw a curve on a whiteboard; for a pricing solution to be as effective as possible it must do more: take a billion rows of data, run those through a model, deploy the guidance out via a robust API, and close the loop to allow the model to grow smarter over time. On top of that, you’ll need experts who have ‘been there’ and can guide the change management with your pricing and sales organization. Getting it all right makes all the difference in the world.”