Data management has bothered large companies for decades. Almost all firms spend both time and money on it and still find the results unsatisfactory. While the issue does not appear to be growing worse, resolving it is increasingly urgent as managers and companies strive to become more data-driven, leverage advanced analytics and artificial intelligence, and compete with data.
Most companies struggle with a few common but significant data management issues:
- First, companies have concentrated on the technical capabilities of data management, which are controlled by the IT function and are needed to acquire, store, and move data. It is difficult to build these technical “pipes,” and in doing so companies have focused more on infrastructure and much less on the outputs: the data products that are used to make decisions, differentiate products and services, and satisfy customers.
- Second, data is created in different parts of the organization to meet the needs of various departments, not for later use by others in data products, business decisions, or processes. Contrast that with a physical product, such as a car, where components such as the chassis and the starter are designed with the end product in mind.
- Third, most organizations lack a common data language. Data is subtle and nuanced and has different meanings for different people in different contexts. Exacerbating this, some departments, taking ownership of “their data,” may be reluctant to share. Others, while willing to share, will not make time to explain these nuances so that colleagues can use the data effectively. This leads other departments to set up their own “near-redundant” databases, adding to the overall confusion.
- Finally, companies are increasingly interested in what happens outside their walls, tapping external data to answer a variety of questions. But external data is largely unmanaged, with little supplier qualification or data quality assessment.
Data supply chain management, with data products as the end result of the process, can help to address each of these issues. It puts equal emphasis on all phases of data management from collection to organization to consumption of data products. It is a means of balancing the benefits of common data with those of unique and tailored data in products, and it is equally suited to internal and external data. Relatively few companies employ data supply chain management, but those that do tend to report better results.
Companies have always produced data products in the form of financial statements and reports to regulators and shareholders. The range and importance of such products continue to grow. For many, the goal is to embed analytics and AI-derived models into products that serve both internal and external customers. Morgan Stanley’s Next Best Action, LinkedIn’s People You May Know, Google’s many search offerings, and MasterCard’s SpendingPulse and Business Locator are good examples. With the issues cited above on full display, “wrangling” the data takes far longer than building the model and still doesn’t solve all the issues.
Fortunately, there is a better way to source high-quality data. It builds on the process and supplier management techniques used by manufacturers of physical products. In particular, manufacturers extend deep into their supply chains to clarify their requirements, qualify suppliers, insist that suppliers measure quality, and make needed improvements at the source(s) of the problem(s). This enables them to assemble components into finished products with minimal “physical product wrangling,” improving quality and lowering costs.
One organization employing supplier quality management in its data supply chain is Altria, the US-based provider of tobacco and smoke-free products. Altria depends on daily point-of-sale data from more than 100,000 convenience stores to complete its market reports and analyses. A team reporting to Kirby Forlin, VP of Advanced Analytics, manages this supplier base. Data requirements are spelled out in contracts, and the team aims to help stores meet them. Altria concentrated first on its most basic requirements. Quality was initially poor, with only 58% of daily submissions meeting them, but the Altria team worked patiently, improving quality to 98% over three years. As the score for basic quality improved, the team added its more advanced requirements to the mix. As Forlin noted, “This is a work in progress. The evidence that we can increasingly trust the data saves us a lot of work in our analytics practice and builds trust into our work.”
The data supply chain can be established within a company by following some of the same steps used in process and quality management for physical supply chains:
1. Establish management responsibilities. As step 1a, the chief data officer or data product manager should name a “data supply chain manager” from their staff to coordinate the effort and recruit “responsible parties” from each department (including external data sources) across the supply chain. Step 1b is to put issues associated with data sharing and ownership front and center. We find that most such issues melt away, as few managers wish to take a hard stance against data sharing in front of their peers.
2. Identify and document the data and the associated cost, time, and quality requirements needed to create and maintain data products (see the sketch after this list).
3. Describe the supply chain. Develop a flowchart that identifies the points of data creation/original sources of data and the steps taken to move, enrich, and analyze data for use in data products.
4. Define and establish measurements. Generally, the idea is to implement measurements that indicate whether requirements are met. Start with data accuracy and the elapsed time from data creation to incorporation into a data product. Measures will vary for each data product’s supply chain.
5. Establish process control and assess conformance to requirements. Use the measurements of step 4 to put the process in control, determine how well the requirements of step 2 are met, and identify gaps.
6. Investigate the supply chain to identify needed improvements, both overall and for particular data products. Determine where the gaps uncovered in step 5 originate in the flowchart of step 3.
7. Make improvements and continuously monitor. Identify and eliminate the root causes of the gaps identified in step 6, returning to previous steps as necessary. Continuously monitor both the input data and the data products, looking to improve products and for the new data and better sources needed to do so.
8. “Qualify” data sources. Companies will continue to employ increasing numbers of external data suppliers, and it is helpful to identify those that consistently provide high-quality data. Audits of their data quality programs provide the means to “qualify” those that do and to identify areas of weakness in those that do not (see the supplier-scoring sketch below).
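To make steps 2 and 4 through 6 concrete, here is a minimal sketch, assuming a Python implementation. The requirement thresholds, field names, and `Submission` structure are illustrative assumptions, not anything prescribed above.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Step 2: document the requirements for one data product's inputs.
# These thresholds are hypothetical examples.
REQUIREMENTS = {
    "max_latency": timedelta(hours=24),   # creation -> incorporation
    "required_fields": ["store_id", "sku", "units_sold", "created_at"],
    "min_daily_conformance": 0.95,        # target share of good submissions
}

@dataclass
class Submission:
    """One daily data submission from a single source (e.g., a store)."""
    source_id: str
    fields: dict            # raw field name -> value
    created_at: datetime    # when the data was created at the source
    received_at: datetime   # when it entered the data product pipeline

def meets_requirements(sub: Submission) -> bool:
    """Step 4: measure one submission against the documented requirements."""
    complete = all(sub.fields.get(f) is not None
                   for f in REQUIREMENTS["required_fields"])
    timely = (sub.received_at - sub.created_at) <= REQUIREMENTS["max_latency"]
    return complete and timely

def daily_conformance(subs: list[Submission]) -> float:
    """Share of today's submissions meeting requirements (cf. Altria's 58% -> 98%)."""
    if not subs:
        return 0.0
    return sum(meets_requirements(s) for s in subs) / len(subs)

def in_control(rate: float) -> bool:
    """Step 5: flag days whose conformance falls below the documented target."""
    return rate >= REQUIREMENTS["min_daily_conformance"]

def gaps_by_source(subs: list[Submission]) -> dict[str, int]:
    """Step 6: count failures per source so improvement starts where gaps originate."""
    gaps: dict[str, int] = {}
    for s in subs:
        if not meets_requirements(s):
            gaps[s.source_id] = gaps.get(s.source_id, 0) + 1
    return gaps
```

In practice the measurements would be richer (accuracy checked against a reference source, field-level validity rules), but the shape is the same: requirements documented once, measured continuously, and gaps traced back to the sources where they originate.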
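Step 8 can be sketched in the same spirit: a simple supplier-scoring scheme, again with assumed names and an assumed 0.95 qualification threshold, that labels external suppliers from their measured conformance history.

```python
def qualify_suppliers(history: dict[str, list[float]],
                      threshold: float = 0.95) -> dict[str, str]:
    """Label each supplier from its recent daily conformance scores.

    history: supplier_id -> list of daily conformance rates (0.0-1.0).
    Suppliers that consistently meet the threshold are "qualified";
    others are flagged for an audit of their data quality program.
    """
    labels = {}
    for supplier, scores in history.items():
        if scores and min(scores) >= threshold:
            labels[supplier] = "qualified"
        elif scores and sum(scores) / len(scores) >= threshold:
            labels[supplier] = "qualified-with-monitoring"
        else:
            labels[supplier] = "audit-required"
    return labels

# Example: two external suppliers with a week of conformance scores.
print(qualify_suppliers({
    "supplier_a": [0.97, 0.98, 0.96, 0.99, 0.97],
    "supplier_b": [0.88, 0.91, 0.96, 0.85, 0.90],
}))
# -> {'supplier_a': 'qualified', 'supplier_b': 'audit-required'}
```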
Key Bank, a top 20 US bank in asset size, uses a broad data supply chain concept to structure its data management initiatives. It breaks its process into the areas of “capture/organize/consume” and attempts to improve efficiency and effectiveness in each area. It recently shifted much of its data storage and analytics to the cloud and found major improvements in flexibility and speed across the supply chain. Its consumption activities were historically focused on classic business intelligence capabilities, but now it also has a strong data science function.
That necessitated a change in the supply chain toward greater data virtualization and the ability to construct views of data that cut across different data marts and that incorporate external data as well. The bank has been able to use its data supply chain to rapidly develop new banking products that rely heavily on data. For example, it was one of the largest lenders of Paycheck Protection Program loans in the US, and it also recently introduced a national digital bank for doctors. Mike Onders, the bank’s chief data officer, is effectively the data supply chain manager. He and his staff have evaluated the ability of the bank’s data supply chain to supply a variety of needed data products.
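As a rough illustration of the kind of virtualized, cross-mart view described above, here is a minimal pandas sketch that joins two internal data marts with an external feed into one logical view. The mart names, columns, and values are invented for the example and do not reflect the bank’s actual systems.

```python
import pandas as pd

# Hypothetical extracts from two internal data marts and one external source.
deposits_mart = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "deposit_balance": [12_000, 3_500, 88_000],
})
loans_mart = pd.DataFrame({
    "customer_id": [1, 3],
    "loan_balance": [250_000, 40_000],
})
external_credit = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "credit_score": [720, 680, 790],
})

def customer_view() -> pd.DataFrame:
    """A "virtual" customer view: nothing is copied into a new mart;
    the cross-mart join is simply computed on demand."""
    view = deposits_mart.merge(loans_mart, on="customer_id", how="left")
    return view.merge(external_credit, on="customer_id", how="left")

print(customer_view())
```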
All companies should aggressively manage their most important data supply chains. Data is as important an asset to businesses as any other type, and data products are increasingly as important as physical ones. The same thinking that has improved physical supply chains for decades is proving equally valuable for data.