Demand Forecasting for New Products – Take a sneak peek into the future with AI/ML apps
How can AI help us with demand forecasting for new products?
AWS has several managed AI services that let customers get the benefits of AI and ML without hiring a team of Ph.D. data scientists to do the R&D and ML engineers to build the solutions. These managed services are production-ready for training as soon as a customer subscribes. Once trained, the predictors associated with the service can be called from the CLI or an API to return predicted data points.
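As a sketch of what such an API call looks like, the snippet below queries a trained Amazon Forecast predictor with the boto3 `forecastquery` client; the forecast ARN and item id are placeholder values, not real resources.

```python
# Sketch: querying a trained Amazon Forecast predictor via the API.
# The ARN and item_id used by callers are placeholders, not real resources.

def build_query(forecast_arn: str, item_id: str) -> dict:
    """Arguments for the forecastquery QueryForecast API call."""
    return {
        "ForecastArn": forecast_arn,
        "Filters": {"item_id": item_id},
    }

def get_predictions(forecast_arn: str, item_id: str):
    import boto3  # created lazily; requires AWS credentials at call time
    client = boto3.client("forecastquery")
    resp = client.query_forecast(**build_query(forecast_arn, item_id))
    # Predictions come back as p10/p50/p90 quantile time series.
    return resp["Forecast"]["Predictions"]
```

The same call is available from the CLI as `aws forecastquery query-forecast`.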
Demand Forecasting for New Products through Cloud-Native services
One such managed AI service is Amazon Forecast, for time-series forecasting. It is a fully managed service, which means AWS takes care of its security, scalability, high availability, and all other nonfunctional aspects.
The use cases of Amazon Forecast are wide-ranging, but predicting prices, demand forecasting for new products, and resource planning of any kind are its key use cases.
Benefits of Demand Forecasting by AWS Managed Services
Aside from the benefits of being a managed service, the key strength of Amazon Forecast is its functional side: it selects the best algorithm based on the data schema and volume. It is not a simple ARIMA forecaster and can use state-of-the-art algorithms to build predictive models, including CNN-QR, DeepAR+, Prophet, NPTS, ETS, and ARIMA. It has built-in support for handling missing values and can choose the best imputation method. It also ships with several pre-built datasets for testing and learning.
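Algorithm selection happens at predictor-creation time. A hedged boto3 sketch of enabling AutoML so the service picks the algorithm itself (the predictor and dataset-group names are placeholders):

```python
def build_predictor_request(name: str, dataset_group_arn: str,
                            horizon: int = 24) -> dict:
    """Arguments for the CreatePredictor API with AutoML enabled,
    letting Forecast choose the best algorithm for the data."""
    return {
        "PredictorName": name,
        "ForecastHorizon": horizon,   # number of future time steps to predict
        "PerformAutoML": True,        # service-side algorithm selection
        "InputDataConfig": {"DatasetGroupArn": dataset_group_arn},
        "FeaturizationConfig": {"ForecastFrequency": "H"},  # assumes hourly data
    }

def create_predictor(name: str, dataset_group_arn: str):
    import boto3  # created lazily; requires AWS credentials at call time
    forecast = boto3.client("forecast")
    return forecast.create_predictor(**build_predictor_request(name, dataset_group_arn))
```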
The pricing of the Amazon Forecast service is based on:
- Size of the training dataset
- Time needed to train on the dataset
- Number of data points predicted/forecasted
Assuming a medium-size enterprise forecasting prices for 10K products has 100 GB of historic demand data that takes about 20-24 hours of training time per month, it will probably pay around $400-$700 per month in total usage cost.
For smaller datasets (5-10 GB) needing less training time and fewer predictions from the trained model, the cost drops to a range much more affordable for startups.
You should expect to spend another $200-$400 on AWS serverless compute/network resources and security services that provide a custom training and monitoring dashboard and expose a public API endpoint for non-AWS applications to consume. Please consult the official pricing calculators on the AWS site for more accurate estimates.
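The three pricing dimensions can be combined into a rough estimator. All unit rates below are placeholder assumptions, not published AWS prices; use the official pricing calculator for real numbers.

```python
# Back-of-the-envelope Forecast cost model.
# Every unit rate here is a PLACEHOLDER assumption, not a published AWS price.
ASSUMED_TRAINING_RATE = 0.24    # $ per training-hour (placeholder)
ASSUMED_STORAGE_RATE = 0.088    # $ per GB-month of training data (placeholder)
ASSUMED_FORECAST_RATE = 0.60    # $ per 1,000 forecast data points (placeholder)

def estimate_monthly_cost(training_hours: float, storage_gb: float,
                          forecast_points: int) -> float:
    """Sum the three usage dimensions into one monthly dollar figure."""
    return (training_hours * ASSUMED_TRAINING_RATE
            + storage_gb * ASSUMED_STORAGE_RATE
            + forecast_points / 1000 * ASSUMED_FORECAST_RATE)
```

Plugging in real rates from the AWS calculator turns this into a usable budgeting tool.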
Predicting Demand using New York City Taxi Trip Record Data
The High Plains Computing (HPC) MLOps team ran this tutorial project to test the Amazon Forecast service using one of its pre-built datasets. For this MLOps exercise, we used the New York City Taxi and Limousine Commission (TLC) trip record data to predict the demand for fleet size at a given taxi stand at a particular date/time.
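Getting the TLC data into Forecast takes a dataset definition plus an import job. A hedged boto3 sketch, where the bucket path, role ARN, frequency, and attribute names are assumptions about how the trip records were shaped:

```python
# Sketch: registering a time-series dataset and importing TLC trip data.
# S3 path, role ARN, frequency, and attribute names are placeholder assumptions.
TAXI_SCHEMA = {
    "Attributes": [
        {"AttributeName": "timestamp", "AttributeType": "timestamp"},
        {"AttributeName": "item_id", "AttributeType": "string"},      # taxi stand id
        {"AttributeName": "target_value", "AttributeType": "float"},  # trips per period
    ]
}

def build_dataset_request(name: str) -> dict:
    """Arguments for the CreateDataset API call."""
    return {
        "DatasetName": name,
        "Domain": "CUSTOM",
        "DatasetType": "TARGET_TIME_SERIES",
        "DataFrequency": "H",   # assumes hourly demand per taxi stand
        "Schema": TAXI_SCHEMA,
    }

def import_csv(dataset_arn: str, s3_path: str, role_arn: str):
    import boto3  # created lazily; requires AWS credentials at call time
    forecast = boto3.client("forecast")
    return forecast.create_dataset_import_job(
        DatasetImportJobName="tlc-import",
        DatasetArn=dataset_arn,
        DataSource={"S3Config": {"Path": s3_path, "RoleArn": role_arn}},
    )
```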
As this was primarily an MLOps demonstration project, the focus of the solution was on automation for agility and a well-architected infrastructure rather than on the accuracy of the predictions.
The team used Terraform to provision all the required AWS resources, including an ECS cluster, API service, Lambda function, Cognito user pool, and everything else needed to securely access the service API endpoint and the dashboard for managing training.
The solution consists of two parts. Part 1 provisions an ECS application that acts as a dashboard for submitting training jobs and checking their status. We created the simple dashboard shown below to upload files and submit training jobs.
This Docker app was built with Python Streamlit (https://github.com/streamlit/streamlit) to render the simple dashboard shown above. A Cognito user pool was used to authenticate and authorize user access to the dashboard. The figure below shows the deployment architecture of the dashboard app.
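A minimal sketch of such a Streamlit upload-and-submit flow; the helper names and the job-record shape are assumptions, and the real dashboard also talks to Cognito, S3, and DynamoDB:

```python
from datetime import datetime, timezone

def make_job_record(filename: str, user: str) -> dict:
    """DynamoDB item recording a submitted training job (assumed shape)."""
    return {
        "job_id": f"{user}/{filename}",
        "status": "SUBMITTED",
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }

def render_dashboard():
    """UI entry point; launch with `streamlit run dashboard.py`."""
    import streamlit as st  # imported lazily so the helpers stay testable
    st.title("Forecast Training Dashboard")
    uploaded = st.file_uploader("Upload historic demand CSV", type="csv")
    if uploaded is not None and st.button("Submit training job"):
        record = make_job_record(uploaded.name, "demo-user")
        # Real app: upload the file to S3, write the record to DynamoDB,
        # then kick off the Forecast import/training workflow.
        st.success(f"Submitted job {record['job_id']}")
```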
The figure above shows a secure, scalable, fully serverless compute solution that renders a dashboard for uploading datasets and training the Amazon Forecast service. We used AWS Cognito user pools to control authentication and authorization, and recorded all activity in a DynamoDB table. The ECS cluster uses AWS Fargate as its compute provider, relieving the customer of any patching or maintenance of compute resources.
Part 2 of the solution provides an API Gateway that makes trained predictors available to other applications via a REST API. A Lambda function takes a payload of data points for which demand is to be predicted, in this case taxi stand identifiers, and predicts the demand for each identifier over the next 24 hours. The figure below shows the Terraform-generated artifacts that make the service available to external, non-AWS supply chain apps.
The Cognito authenticator only allows authenticated users to use the trained predictors. The AWS Lambda function uses the Python FastAPI library to provide a backend for the AWS API Gateway. DynamoDB is used to identify the trained predictors and to log usage.
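The Lambda backend can be sketched as below. The route, helper, and parameter names are assumptions; the real handler would call the `forecastquery` QueryForecast API with the computed time window and return the predicted demand series.

```python
from datetime import datetime, timedelta

def forecast_window(start: datetime, hours: int = 24) -> tuple:
    """Start/end timestamps bounding the next-24-hour demand query."""
    return start.isoformat(), (start + timedelta(hours=hours)).isoformat()

def create_app():
    """Build the FastAPI app; adapt to Lambda with an ASGI adapter such as Mangum."""
    from fastapi import FastAPI  # packaged with the Lambda deployment, not needed here
    app = FastAPI()

    @app.get("/demand/{stand_id}")
    def demand(stand_id: str):
        start, end = forecast_window(datetime.utcnow())
        # Real handler: forecastquery.query_forecast(
        #     ForecastArn=..., StartDate=start, EndDate=end,
        #     Filters={"item_id": stand_id})
        return {"stand_id": stand_id, "window": [start, end]}

    return app
```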
The HPC team found the Amazon Forecast service fast to train and easy to use. The team plans a follow-up on automating the accuracy-measurement flow using Terraform and Glue jobs.
Note: You can take a look at our Data engineering and analytics services.
Committed to delivering the best
There are thousands of AWS and CNCF-certified Kubernetes solution partners, each with unique expertise and focus areas. Our focus is on security best practices, automation, and excellence in cloud operations.
Please reach out to us if you have any questions.