Creation of a knowledge base on the technical meaning of the data coming from the Energy Trading and Risk Management (ETRM) system: The technical meaning of the data flowing from the Endur systems (the ETRM product from Openlink/ION) to the middleware is documented, and the resulting knowledge base is integrated into a Talend Data Catalog. The knowledge base has improved the involved developers' understanding of the data transported through the middleware; the development of new interfaces and the maintenance of existing ones should thus become more efficient. PTA is responsible for all aspects of catalogue creation. The automated part of the analysis of the JSON file stream flowing from Endur to the middleware is carried out with Python and Pandas, while the technical meaning of the data fields in the JSON files is analysed with the help of the user documentation and the documentation of the customisations.
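The automated part of such an analysis can be sketched roughly as follows. The sample messages and all field names (`deal_id`, `volume_mwh`, and so on) are invented for illustration and do not reflect the actual Endur payloads:

```python
import pandas as pd

# Hypothetical sample of JSON messages from the stream; field names are
# illustrative assumptions, not the real Endur schema.
messages = [
    {"deal_id": 1001, "instrument": "PowerSwap", "counterparty": "ACME",
     "volume_mwh": 50.0},
    {"deal_id": 1002, "instrument": "GasForward", "counterparty": "Globex",
     "price_eur": 27.5},
]

# Flatten the (possibly nested) JSON into a tabular structure
df = pd.json_normalize(messages)

# Profile each field: how often it is populated and which dtype it carries --
# the raw material for documenting its technical meaning in the catalogue
profile = pd.DataFrame({
    "non_null": df.notna().sum(),
    "dtype": df.dtypes.astype(str),
})
print(profile)
```

The profile table shows, for example, that a field like `volume_mwh` appears only in some message types, which is exactly the kind of observation the knowledge base then explains.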
For the catalogue creation, the contents of the JSON file stream are analysed with Python and Pandas. The JSON files are categorised by technical type, and the data fields typical of each category are identified. This is followed by an analysis of the technical significance of the data fields, drawing on project documentation, user documentation and expert interviews. Use cases are employed to work out which types of information matter most to users and how they are used. A particular challenge is linking the data transmitted to the middleware to its presentation in the dialogue fields of the ETRM standard software Endur. As a first step towards a solution, an Excel-based knowledge base is being built, which will then be transferred to the Talend Data Catalog.
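Identifying the fields typical of each category can be sketched as a per-category coverage calculation. The category field (`msg_type`), the field names and the 90% threshold are all assumptions for illustration:

```python
import pandas as pd

# Hypothetical flattened messages; "msg_type" plays the role of the
# technical category, all names are illustrative.
rows = [
    {"msg_type": "TRADE", "deal_id": 1, "counterparty": "ACME", "volume_mwh": 50.0},
    {"msg_type": "TRADE", "deal_id": 2, "counterparty": "Globex", "volume_mwh": 80.0},
    {"msg_type": "SETTLEMENT", "deal_id": 1, "amount_eur": 1375.0},
]
df = pd.DataFrame(rows)

# Per category, the share of messages in which each field is populated
coverage = df.notna().groupby(df["msg_type"]).mean().drop(columns="msg_type")

# Fields populated in (almost) every message of a category count as
# "typical" for it; the 0.9 threshold is an assumed convention
typical = coverage.ge(0.9)
print(typical)

# The result could then be exported as the Excel-based knowledge base
# skeleton, e.g. typical.to_excel("knowledge_base.xlsx")  # needs openpyxl
```

In this toy sample, `volume_mwh` comes out as typical for `TRADE` but not for `SETTLEMENT`, mirroring how category-specific fields are presented for documentation.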
Endur is a highly configurable ETRM system used to manage payment flows and risks for all types of energy trades. The information from Endur as the source system flows to the consumer systems as a stream of JSON files via the Apache Kafka-based streaming interface. The publish-subscribe pattern implemented by the Kafka solution is to be made more usable through the introduction of the data catalogue and possibly expanded into a genuine self-service solution; the data catalogue thus becomes a core component of the implemented publish-subscribe pattern. With the help of the Talend Data Catalog, the knowledge base will be integrated into the customer's data search and data discovery tools.
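In such a publish-subscribe setup, a consumer system subscribes to a Kafka topic and deserialises the JSON messages; the catalogue then tells the subscriber what each field means. The sketch below assumes the kafka-python client; the topic name, broker address and message fields are invented placeholders:

```python
import json

def decode_message(raw: bytes) -> dict:
    """Deserialise one JSON message from the stream into a dict."""
    return json.loads(raw.decode("utf-8"))

def consume(topic: str = "endur.deals") -> None:
    """Subscribe to the topic and print incoming messages.

    Assumes the kafka-python client (pip install kafka-python) and a
    reachable broker; both the topic and address are placeholders.
    """
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers="broker:9092",   # placeholder address
        value_deserializer=decode_message,
        auto_offset_reset="earliest",
    )
    for record in consumer:
        # record.value is a dict whose fields the data catalogue documents
        print(record.value)

# The deserialiser can be exercised without a broker:
sample = b'{"deal_id": 1001, "instrument": "PowerSwap"}'
print(decode_message(sample))
```

Keeping the deserialiser separate from the consumer loop also makes it reusable in the offline Pandas analysis of the same stream.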