
Putting big data into context

06 August 2017

Engineers at process plants are now awash with data from a variety of sources. Michael Risse explains how analytics can be used to create actionable insights from this data to improve business outcomes.

Big data in process manufacturing is simultaneously old news, new news and big news. It’s old news because while many writers and businesses started paying attention to big data around 2011, it has been a factor in process manufacturing plants and facilities for decades. 

Along with DCS, SCADA and HMI systems came process historians to store and manage the petabytes of data associated with modern operations. Many process automation vendors will tell you they’ve been ‘doing big data for years’ because the generation, storage and management of large amounts of data is nothing new to most process plants and facilities. 

What’s new?
But there is something new, or at least more recent, about big data in process manufacturing. This new component encompasses not only the increasing volume of data generated, but also the variety of data types, including mapping and location, video, remote asset, ERP, asset management and pricing system data.

In the generally accepted definition of big data – coined in 2001 by Doug Laney, now with Gartner – this is the ‘variety’ component, which sits alongside volume and velocity as the three pillars of the modern data environment. Laney didn’t use the term big data, but he correctly identified the characteristics that endure to this day: lots of data from many sources, created at an accelerating rate thanks to the low cost of generation, collection and storage.

The big news about big data for process manufacturing is the opportunity to make better use of the data available, regardless of its source, to help improve business outcomes. This could be data from operations leveraged by the business side to improve pricing and cost decisions, or data from the business side used by operations to improve trade-offs in production priorities, among many other uses. In any of these use cases, the key to success is contextualisation of the data.

In process manufacturing, this is typically thought of as an L1 or L2 data mapping effort – for example, associating assets with batch IDs – but this new opportunity is at the L3 and L4 levels. At these levels, the task is not just associating sensor, asset and batch data, but also bringing in process, profit and plant analytics.
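As a minimal sketch of what contextualisation means in practice, the following joins time-stamped sensor readings to whichever batch was running when each reading was taken. All data, field names and batch IDs here are hypothetical, for illustration only:

```python
from datetime import datetime

# Hypothetical batch records: (batch_id, start, end)
batches = [
    ("B-101", datetime(2017, 8, 1, 8, 0), datetime(2017, 8, 1, 12, 0)),
    ("B-102", datetime(2017, 8, 1, 13, 0), datetime(2017, 8, 1, 17, 0)),
]

# Hypothetical sensor readings: (timestamp, value)
readings = [
    (datetime(2017, 8, 1, 9, 30), 74.2),
    (datetime(2017, 8, 1, 14, 15), 68.9),
]

def contextualise(readings, batches):
    """Tag each reading with the batch that was running at that time (or None)."""
    tagged = []
    for ts, value in readings:
        batch = next((b for b, start, end in batches if start <= ts <= end), None)
        tagged.append({"timestamp": ts, "value": value, "batch_id": batch})
    return tagged

tagged = contextualise(readings, batches)
```

In a real plant this join would run against a historian rather than in-memory lists, but the principle is the same: every data point gains the context needed to compare it across batches, assets or sites.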

Just as an operator can sit in front of an HMI depicting plant status, an engineer can now sit in front of a screen with a visual presentation of all the data sources available to him or her for an investigation or optimisation effort. This self-service data access for ad hoc analytics is now a reality, which is a significant new big data development.

Analytics in action
Using new predictive analytics capabilities, engineers can know in advance the likely condition of an asset in terms of required repair or maintenance. But while the engineer may be aware of asset conditions, they don’t know the implications from a profit and loss perspective because they can’t see the big picture from a plant manager’s point of view. 

In some situations, costs saved in one area may lead to additional costs in another. For example, reducing throughput to extend asset life in one area could negatively affect downstream production to an extent greater than the savings realised. 
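A toy calculation makes the trade-off concrete. Every figure below is hypothetical, chosen only to show how a local saving can be a net loss once the downstream effect is priced in:

```python
# Hypothetical figures for a throughput-reduction decision
asset_life_saving = 40_000   # maintenance/replacement cost avoided by running gently
lost_units = 500             # downstream production lost due to reduced throughput
margin_per_unit = 120        # profit margin per unit of downstream product

downstream_loss = lost_units * margin_per_unit   # 60,000 in lost margin
net = asset_life_saving - downstream_loss        # negative: the 'saving' costs money
```

Without the business-side data (lost units, margins), the engineer sees only the positive first term; with it, the decision reverses.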

This is the opportunity of big data in the hands of an engineer: modern analytics across data, regardless of source and type, to inform better outcomes. The engineer must have access to the data, contextualise it and execute the analytics, all within the time it takes to improve an outcome. If these tasks take longer than a batch run time, for example, they are not helpful – a common situation given the complexity of collecting data from a variety of sources, cleansing it and modelling it.

But with the right data analytics tool, the desired results can be produced quickly, as these use cases demonstrate.

Pharmaceutical reactor scale-up
Problem: When moving from experiments to full-scale production, the reactor yields realised in the lab could not be duplicated upon scale-up. The causes were difficult to identify using manual methods and existing data management tools, primarily spreadsheets.

Solution: Bringing together data from disparate sources associated with the lab and full-scale production, with the help of Seeq productivity applications, allowed this pharmaceuticals company to quickly compare the two processes and ascertain key differences. Full-scale production parameters were then adjusted to improve yields.

Electric power grid management
Problem: An electrical grid management group was unable to create a mathematical model to predict how much load industrial users would shed in response to an electrical rate change.

Solution: A model was built using various data sources to predict load shedding in response to rate changes. All operators now consistently identify events mathematically, instead of only the best operators occasionally finding insight using their intuition. Using these mechanisms, the group now has closed-loop control of grid loading.
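The source does not describe the group's actual model, but a minimal version of this kind of prediction can be sketched as an ordinary least-squares fit of historical load shed against rate change. The data points and units below are hypothetical:

```python
# Hypothetical historical observations: (rate_increase_pct, load_shed_mw)
history = [(1.0, 12.0), (2.0, 25.0), (3.0, 36.0), (4.0, 50.0)]

def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

a, b = fit_line(history)
# Predicted load shed (MW) for a hypothetical 2.5% rate increase
predicted_shed = a * 2.5 + b
```

A production model would draw on many more variables (weather, time of day, customer class), but even a simple fitted relationship replaces intuition with a number every operator can apply consistently.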

Open-pit mining ore truck operations
Problem: The abundance of data produced by the large and complex ore trucks working the mine was difficult to use because of the many sources and forms it took, and the number of historians collecting this information.

Solution: Seeq gathers data from all the disparate sources in all its forms and transforms it into one common format for analysis. Complex questions such as engine loading under very specific conditions are now easy to answer, leading to improved operations.
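The transformation into a common format can be illustrated with a small sketch. The two record schemas and field names below are invented for the example; the point is simply that heterogeneous historian records are mapped onto one shape before analysis:

```python
# Hypothetical records from two historians with different schemas
historian_a = [{"ts": "2017-08-01T09:00", "tag": "engine_load", "val": 0.82}]
historian_b = [{"time": "2017-08-01 09:05", "name": "engine_load", "value": "0.79"}]

def normalise(record):
    """Map either hypothetical schema onto one common shape."""
    if "ts" in record:
        return {"timestamp": record["ts"].replace("T", " "),
                "tag": record["tag"],
                "value": float(record["val"])}
    return {"timestamp": record["time"],
            "tag": record["name"],
            "value": float(record["value"])}

unified = [normalise(r) for r in historian_a + historian_b]
```

Once everything shares one schema, a question like 'engine loading under very specific conditions' becomes a straightforward filter over a single dataset rather than a manual reconciliation across historians.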

Big data is here in process plants and facilities, and in ever greater quantities from a wider variety of sources. The challenge is figuring out the best way to present this data to the engineer so they can quickly make decisions to improve outcomes. Modern data analytics solutions address this issue by visually presenting data to users, and by providing them with tools to interact with and produce results from the data.

Michael Risse is vice president and chief marketing officer at Seeq Corporation.
