The dynamic database contains individual records (or summaries, or aggregations) of events, transactions, states, results and outcomes.
Structured databases enable actuals to be recorded against desired states, plans, key performance indicators, targets, budgets, etc. In this way, the performance of each factor, participant, patient, treatment, staff member, location, item, supplier and manufacturer can be measured by linking ID and Auto ID to master data and thence to the corresponding dynamic databases (DDBs).
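The linkage described above can be sketched in relational terms: a master data table keyed by a global identity, and a DDB of transactions that carries only IDs plus the measured values. The table and column names here are illustrative assumptions, not a prescribed schema.

```python
import sqlite3

# Hypothetical schema: master data keyed by a global item_id, and a dynamic
# database (DDB) of transactions referencing it via that ID plus a
# system-assigned Auto ID.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE master_item (
        item_id   TEXT PRIMARY KEY,   -- global identity
        name      TEXT,
        supplier  TEXT
    );
    CREATE TABLE ddb_transaction (
        auto_id   INTEGER PRIMARY KEY AUTOINCREMENT,  -- Auto ID
        item_id   TEXT REFERENCES master_item(item_id),
        qty       INTEGER,            -- physical value
        value     REAL                -- financial value
    );
""")
conn.execute("INSERT INTO master_item VALUES ('I1', 'Widget', 'Acme')")
conn.executemany(
    "INSERT INTO ddb_transaction (item_id, qty, value) VALUES (?, ?, ?)",
    [("I1", 10, 50.0), ("I1", 4, 20.0)],
)

# Measure actuals per supplier by joining the DDB back to master data.
rows = conn.execute("""
    SELECT m.supplier, SUM(t.qty), SUM(t.value)
    FROM ddb_transaction t JOIN master_item m USING (item_id)
    GROUP BY m.supplier
""").fetchall()
print(rows)  # [('Acme', 14, 70.0)]
```

Because the DDB rows carry only the identity keys, any master data attribute (supplier, location, manufacturer) can be brought into an analysis through a join rather than duplicated in every record.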
DDBs often contain both physical and financial values. Both types need to be well structured according to the expected use of the data for analysis, record-keeping and audit purposes. Badly designed DDBs present substantial problems both in accessing data and in achieving cost-effective response times.
Both design and tuning require a great deal of experience. There are relatively few individuals who understand well the ways in which users will want to use data while also having the skills to get the best out of current database technologies.
A wealth of badly structured information results in a poverty of attention and understanding. Although data warehouses and data mining techniques have improved in recent years, and immense computing power is often available, it is very important when structuring DDBs, and their links to Identities and Master Data, to understand the potential relationships among data entities and also to anticipate the questions which are likely to be asked. Otherwise, answers are unlikely to be timely, meaningful or cost-effective.
This involves considerable skill in deciding what data to include directly in the DDB and what to retrieve via the Master Data Base as required. For example, frequent analyses may be required of the value of business done per item and in aggregate per customer or supplier. Less frequently there may be questions about physical volumes per item per value chain location.
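The trade-off above can be illustrated with a minimal sketch: the DDB records hold only IDs and measured values, while descriptive attributes such as the customer name are resolved from master data at analysis time. All record contents and names here are assumed for illustration.

```python
from collections import defaultdict

# Hypothetical DDB records: each carries only identity keys plus the
# physical (qty) and financial (value) measurements.
ddb = [
    {"item_id": "I1", "customer_id": "C1", "value": 120.0, "qty": 3},
    {"item_id": "I2", "customer_id": "C1", "value": 80.0,  "qty": 5},
    {"item_id": "I1", "customer_id": "C2", "value": 40.0,  "qty": 1},
]
# Assumed master data table mapping customer IDs to descriptive attributes.
master_customer = {"C1": "Alpha Ltd", "C2": "Beta GmbH"}

# Frequent analysis: aggregate value of business per customer, with the
# customer's name looked up via master data rather than stored in the DDB.
value_per_customer = defaultdict(float)
for row in ddb:
    value_per_customer[master_customer[row["customer_id"]]] += row["value"]

print(dict(value_per_customer))  # {'Alpha Ltd': 200.0, 'Beta GmbH': 40.0}
```

A frequently run aggregation like this argues for keeping the value field directly in the DDB, whereas rarely queried attributes are better fetched from master data on demand.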
Data modelling and architecture
Some of the most important advances in database methodologies took place in the 1970s and 1980s with the advent of Relational Databases, based on the work of E. F. Codd, "popularised" by James Martin, and translated into commercial reality by companies such as ORACLE. The key principles of database design and use now require reinforcing in the context of value chain management. A more comprehensive and integrated approach to Data Architectures is needed in order to better define data entities and relationships at the time of Business Process Modelling. Indeed, advances need to be made in the optimising capabilities of business modelling tools, which are too often descriptive rather than heuristic.
Every few years a software offering appears which promises the ability to analyse all your internal and external data without having to define identities or structure your master and dynamic data. Such a promise is improbable, if not impossible, to fulfil.
Adaptive intelligent agents
There are some promising benefits to be had from Adaptive Intelligent Agent software and related developments, which can examine and learn from "raw" value chain data (see Reference G). This is not the place to discuss their potential. However, the use of standard, global value chain identities and data as described in this paper is fundamental to the success of all value chain management, and it will also make such Adaptive Intelligent Agents more beneficial.