To create planograms, the customer supplies a third-party software solution with a variety of master data. Over the course of the project, this data supply is to be adapted and extended so that the solution can create market-specific planograms.
To feed data to the third-party software, the customer runs a large number of batch jobs that together form a complex ETL pipeline. These jobs were originally based on Spring and operated on the mainframe (z/OS). To scale with the expected growth in data volume, the components were migrated to Google Cloud: the Spring-based applications were ported to Quarkus and now run on Cloud Run, Google's serverless container runtime. The domain-oriented job chains are controlled and orchestrated by Cloud Composer, a Google Cloud managed service for Apache Airflow. Existing data sources were extended to provide information about market-specific listings.
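Conceptually, such a domain-oriented job chain can be expressed as an Airflow DAG deployed to Cloud Composer, where each step triggers one of the migrated Quarkus batch jobs as a Cloud Run job. The following sketch is illustrative only: the DAG id, project id, region, and job names are assumptions, and it relies on the `CloudRunExecuteJobOperator` from the Airflow Google provider package.

```python
# Illustrative Cloud Composer DAG for one domain-oriented job chain.
# All identifiers (dag_id, project, region, job names) are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.cloud_run import (
    CloudRunExecuteJobOperator,
)

with DAG(
    dag_id="planogram_master_data",   # hypothetical chain name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Each task executes a containerized Quarkus batch job on Cloud Run.
    extract = CloudRunExecuteJobOperator(
        task_id="extract_listings",
        project_id="example-project",       # placeholder project
        region="europe-west1",              # placeholder region
        job_name="extract-listings-job",    # hypothetical Cloud Run job
    )
    transform = CloudRunExecuteJobOperator(
        task_id="transform_master_data",
        project_id="example-project",
        region="europe-west1",
        job_name="transform-master-data-job",
    )
    load = CloudRunExecuteJobOperator(
        task_id="load_planogram_feed",
        project_id="example-project",
        region="europe-west1",
        job_name="load-planogram-feed-job",
    )

    # Dependencies define the job chain: extract, then transform, then load.
    extract >> transform >> load
```

Because Airflow evaluates dependencies per task, a failed job can be retried in isolation without re-running the whole chain, which is one benefit of moving orchestration from mainframe scheduling to Cloud Composer.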
To increase sales, both the listings and the placement of products on the shelf are to be adapted to the local needs of each store's customer base. Given the number of the customer's stores, creating market-specific planograms manually is not feasible, so the process must be supported by a high degree of automation. Category management is thereby enabled to create a large variety of planograms simply by configuring the applications involved in the process.