
Process Tempo Insights

Phil Meredith

You are processing data all wrong!



Every byte of data a business collects is used for a singular purpose: to facilitate decision making. Whether they are forward-looking or backward-looking, short-term or long-term, decisions require data, and the entire purpose of collecting data should be geared to support them.

Organizations that can make faster decisions have a distinct competitive advantage. Therefore, the easier it is for decision makers to get their hands on rich, timely, and useful information, the more agile and competitive the organization becomes.

Fact: If the business is growing (or changing), there will be a constant and growing demand for data. Unless the data processing team is perfectly staffed and can generate data on demand, they will always face a backlog. The more complex the IT infrastructure, the greater the backlog. The greater the backlog, the less agile the organization.


Organizations need to rethink their data processing strategy and shift to one that emphasizes agility and improves the flow of information to decision makers.

Just-In-Time Data Processing

In the 1960s and 70s, Japanese auto manufacturers perfected a system known as Just-In-Time (JIT) manufacturing. The idea behind JIT manufacturing was to minimize inventory and create a lean manufacturing process that enabled them to rapidly assemble vehicles from pre-built sub-components. This era also produced the concept of Total Quality Management (TQM), which empowered those on the factory floor to immediately address issues with product quality. These two concepts revolutionized the auto industry and put American auto companies on their heels. Can we take this approach and apply it to data processing? The answer is a solid yes.

Symptoms of a data processing approach gone bad:

  • Is there an over-reliance on spreadsheets or Shadow IT? Spreadsheets are not only risky, they also age very fast. Over-reliance on them is a big problem.

  • Do data requests take longer than a single day to be fulfilled? Imagine data requests being fulfilled instantly. How much more agile would your business become?

  • Does the organization suffer from database bloat? Stovepipe ETL? A complex web of data pipelines trying to serve the needs of thousands of databases is an ugly mess that needs to be addressed ASAP!

Just-In-Time (JIT) Data Processing and Total DATA Quality Management (TDQM)

Let’s consider a real-world example: the marketing team wants to identify which of their customers might be most receptive to purchasing a new product. In short, the marketing team needs to make a data-driven decision, and therefore a new demand is placed on the data processing team.

The old approach: someone from the marketing team makes a formal data request to the IT team. Weeks or months pass and the request is eventually fulfilled. The organization suffers from the inability to make timely decisions. Data quality problems are discovered way too late.

The new approach: a member of the marketing team compiles their own data from a web portal designed to give non-technical users direct access to clean, organized data. They combine various datasets to create their own reports, and they use a suite of easy-to-use analysis features to identify which customers they wish to target. Data problems are uncovered and corrected much sooner. The organization benefits from smarter, data-driven decisions made in a timely manner. Both the marketing team and the data processing team are heroes.


A Better Approach

First question: could the data processing team have predicted the marketing team’s needs? The answer is absolutely. The data processing team needs to think in terms of forecasts, demand, and just-in-time data processing, and build a continuous communication pattern to facilitate this.

With a forecast in place, the data processing team can work proactively to meet future needs. Pre-positioning data so that it can be leveraged as needed buys the data processing team much-needed time to address more complex needs. This is the heart of the just-in-time approach and, once in place, it will make the organization much more agile.

Next, the data processing team will need to describe and categorize data so that a non-technical user can understand it. The marketing team will need customer data, product data, purchasing history, and customer support data. Simply acquiring the data is one challenge; understanding it is an entirely different one. The end consumer must clearly understand the data they are looking at, so it is imperative that the data processing team maintain a non-technical data catalog on their behalf.
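
To make this concrete, here is a minimal sketch in Python of what a single non-technical catalog entry might capture. The structure and field names are our own illustrative assumptions, not Process Tempo’s actual schema; the point is that every dataset carries a plain-language description, a named owner, and a glossary the marketing team can actually read.

```python
from dataclasses import dataclass, field

# Hypothetical catalog entry: every field is written for a business reader,
# not an engineer. Nothing here is a table name or a connection string.
@dataclass
class CatalogEntry:
    business_name: str   # plain-language name, not the physical table name
    description: str     # what the data means, in business terms
    owner: str           # who to ask when something looks wrong
    refreshed: str       # how current the data is
    glossary: dict = field(default_factory=dict)  # column -> plain meaning

customers = CatalogEntry(
    business_name="Customers",
    description="One row per active customer, including contact details.",
    owner="Data Processing Team",
    refreshed="Nightly at 02:00",
    glossary={"ltv": "Lifetime value: total revenue from this customer to date"},
)
print(customers.business_name, "-", customers.description)
```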

A number of logistical issues will also need to be addressed. It is likely that the consumer of the data won’t know exactly what they need. They will likely come to the “well” several times, so making data access as simple as possible is critically important. Complex concepts such as data joins, JDBC connectors, or SQL should not even appear in the dialog as end consumers seek the data they need.
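
As an illustration of hiding that complexity, the toy sketch below wraps a join and a SQL query behind a single plain-language function. The tables, columns, and sample data are all hypothetical, and this is a sketch of the principle rather than how any particular product implements it:

```python
import sqlite3

# Minimal in-memory setup so the sketch runs end to end (sample data only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, email TEXT);
    CREATE TABLE purchases (customer_id INTEGER, product TEXT, purchased_at TEXT);
    INSERT INTO customers VALUES (1, 'Acme Corp', 'ops@acme.example');
    INSERT INTO purchases VALUES (1, 'widget-pro', '2023-04-01');
""")

def customers_with_purchases(product: str) -> list[dict]:
    """Plain-language entry point: no joins, connectors, or SQL in the dialog."""
    rows = conn.execute(
        "SELECT c.name, c.email, p.purchased_at "
        "FROM customers c JOIN purchases p ON p.customer_id = c.id "
        "WHERE p.product = ?",
        (product,),
    ).fetchall()
    return [dict(zip(("name", "email", "purchased_at"), r)) for r in rows]

print(customers_with_purchases("widget-pro"))
```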

Finally, can the organization leverage the data processing it has built in the past? Many organizations create a new data pipeline for each new request. Over time, they end up with thousands of these pipelines and a massively complex data environment. This complexity means it is much easier to start a new pipeline than to leverage an existing one, which further perpetuates the scenario. All of this is a tremendous waste.

As an example, a large auto manufacturer simplified their data processing infrastructure by creating reusable data pipelines into Process Tempo. Traditionally, their data processing team had used Alteryx to create a unique workflow for each data request. Over time, this group found themselves trying to manage hundreds of data pipelines, and the Alteryx team struggled to keep up with demand. With Process Tempo in place, the Alteryx team was able to simplify processes, reuse their work, and better position themselves to support future requests.
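
The general idea, independent of any particular tool, looks something like the sketch below. The step names and parameters are invented for illustration (this is not Alteryx or Process Tempo code); the pattern is the point: small, reusable steps that are parameterized per request rather than rebuilt from scratch each time.

```python
from typing import Callable, Iterable

Record = dict
Step = Callable[[Iterable[Record]], Iterable[Record]]

def pipeline(*steps: Step) -> Step:
    """Compose reusable steps into a single pipeline."""
    def run(records: Iterable[Record]) -> Iterable[Record]:
        for step in steps:
            records = step(records)
        return records
    return run

def keep(field: str, value) -> Step:
    """Reusable filter step, parameterized per request instead of rebuilt."""
    return lambda recs: (r for r in recs if r.get(field) == value)

def select(*fields: str) -> Step:
    """Reusable projection step."""
    return lambda recs: ({f: r[f] for f in fields} for r in recs)

# Two different "requests" reuse the same steps with different parameters,
# instead of spawning two brand-new pipelines.
data = [{"region": "east", "name": "A", "spend": 10},
        {"region": "west", "name": "B", "spend": 20}]
east_report = pipeline(keep("region", "east"), select("name", "spend"))
west_report = pipeline(keep("region", "west"), select("name"))
print(list(east_report(data)), list(west_report(data)))
```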

Introducing Process Tempo

Building a JIT and TDQM data processing capability requires a data processing platform purpose-built for it. The original data warehouse concept was designed to pre-position data for future use. The challenge was the immense amount of engineering involved and the difficulty engineers had trying to adapt the data warehouse to meet new requirements. This put the brakes on timely, data-driven decision making.

Process Tempo solves this problem and provides several key features needed for JIT and TDQM data processing:

  • Process Tempo consolidates data into a single, flexible platform, greatly reducing the engineering required to position data for future consumption. This approach also greatly reduces the number of data pipelines.

  • Process Tempo provides a user-friendly interface designed for non-technical users. It enables a much broader user community (versus just the engineering crowd), which is a foundational component of improving data quality.

  • Process Tempo provides a user-friendly data catalog so that users always have a solid understanding of the data they are viewing.

  • Process Tempo allows self-service access to data. This means users get the data they need when they need it, a core requirement of JIT data processing.

  • Process Tempo provides a much more flexible platform that enables a small data processing team to meet the needs of a wider group of decision makers.

Summary

Organizations that struggle to produce timely and accurate data operate with less agility and therefore struggle to compete. The demand for timely and contextual data will only continue to grow and so these organizations need to rethink their data processing strategy.

Process Tempo provides a modern approach that can serve as the foundation for Just-In-Time data processing and as a core component of Total Data Quality Management. These capabilities will help organizations become much more agile by providing decision makers with high-quality, contextual data in a timely fashion.

