Addressing the perils of DIY DCS

20 December 2011

Tim Sweet, of Honeywell Process Solutions, discusses the ‘top seven’ issues to consider when evaluating a DCS, versus building a DIY distributed control system using a PLC-based architecture.

The DCS architecture has always been focused on distributing control on a network so that operators can monitor and interact with the entire scope of the plant. As such, the classic DCS originated from an overall system approach. Coordination, synchronisation and integrity of process data over a high-performance and deterministic network are at the core of the DCS architecture.

PLC architectures have always focused on flexible and fast local control. Recent advancements in PLC technology have added process control features. When PLCs and HMI software packages are integrated, the result looks a lot like a DCS. But, all is not as it seems. This is very much a ‘do-it-yourself’ (DIY) approach with technical risk and added costs that may not always be obvious.

Traditionally, DCSs have been more expensive to purchase than a PLC-based system and many processing plants had lower demands in terms of production rates, yield, waste, safety and regulatory compliance than they are experiencing today. A PLC-based system offered a lower capital investment and from a functional point of view was ‘good enough’. However, times have changed. Demands on manufacturing companies have risen - and the purchase price of the DCS has come down. As a result, many control system engineers, maintenance managers and plant managers are taking a fresh look at the trade-offs between a DCS and a PLC-based control system architecture as they plan their automation capital expenditures.

The debate over the virtues of DCS and PLC has been ongoing since the two architectures came into existence 40 years ago. One might think that enough has been said, that the debate might well be over. But it is gaining strength! As functionality differences narrow and price points align, the debate is getting more intense.

Network performance
Good network performance starts with proper network planning, which can only be done with an intimate knowledge of the communication behaviour of each network node and the protocol used to carry network messages. Most process automation suppliers have taken care of this requirement. They should provide best-practice information so the user starts with a sound network design for their control system. Contrast this to the DIY world where the application engineer is the first to ever put a particular network topology together.

Once the network planning and installation are complete, it is time to see how the network performs. The same network topology can be subjected to a wide variation in communication traffic based on the amount of data acquisition, alarm reporting, historisation, peer-to-peer messages and backup tasks. Suppliers can take care of this through comprehensive maximum topology testing. Honeywell, for example, subjects its Experion network to high levels of message volume in test labs to ensure reliable network performance in demanding environments.

Assuming that the user has planned and installed their network, the plant has reached maximum production capacity, and everything is working as expected, how does the user keep it that way?

Honeywell can provide its Fault-Tolerant Ethernet (FTE), a redundant industrial Ethernet networking technology utilising inexpensive off-the-shelf components to provide a high-availability solution. FTE continuously cares for the process control network (PCN) by providing ample network diagnostics that are tracked and reported as a part of the base Experion system.

Control performance
Good process control is built on reliable and repeatable execution of the control strategy. The process controllers that are part of the classic DCS architecture have a fundamentally different operating philosophy from that of a PLC. While the PLC runs ‘as fast as it can’, the process controller favours repeatability. That means the control strategy runs on fixed clock cycles - running faster or slower is not tolerated.
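The distinction can be sketched in a few lines. Below is a hypothetical fixed-cycle executive (names and the 100 ms cycle time are illustrative, not any vendor's implementation): unlike a free-running PLC scan, which would call the strategy again as soon as it finishes, each execution is pinned to the start of its cycle, and an overrun is counted rather than the cycle silently stretching.

```python
import time

CYCLE_S = 0.1  # fixed 100 ms control cycle (illustrative value)

def run_fixed_cycle(strategy, cycles):
    """Execute a control strategy on a fixed clock, DCS-style.

    A free-running PLC scan would re-invoke strategy() immediately;
    here each execution is pinned to the start of its cycle, and an
    overrun is reported instead of the cycle silently stretching.
    """
    next_deadline = time.monotonic()
    overruns = 0
    for _ in range(cycles):
        strategy()
        next_deadline += CYCLE_S
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)   # hold the fixed cadence
        else:
            overruns += 1           # strategy took longer than one cycle
    return overruns
```

In a real controller the overrun count would feed system diagnostics; the point is that repeatability, not raw speed, is the design goal.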

Other system services are also designed to give priority to solving the controller configuration. Controller-generated alarms, for instance, can be throttled if they are interfering with control and recovered later when process disturbances subside. This can only be managed effectively by tightly coordinating the controller generating the alarms with the alarm and event subsystems that collect, store and report those alarms. Again, a system approach from the outset is what the DCS is all about.

HMI graphics
Suppliers of HMI software packages typically boast about how easy it is to design graphics for the operator. But designing graphics is not how a process plant makes money. Imagine a process control environment where one doesn’t need to build graphics, because they are built for you.

With a system where the control and operator environments are designed and built together, often 90% of what is needed to run a process plant can be made standard. For example, Honeywell’s experience enables it to provide hundreds of standard faceplates, group displays and status displays that are vital to safe and efficient plant operation. These graphics are provided out-of-the-box and are ready to be instantiated from the Experion Server to any connected operator station.

Over the past 10 years, Honeywell has supported the Abnormal Situation Management (ASM) Consortium to define safe display principles and practices, and to build that know-how into its Experion HMIWeb technology and standard displays. Every HMIWeb graphic follows the same visual and operational conventions as defined by ASM. As a result, the user gets fewer operator errors - with little or no effort - thanks to better, safer HMI graphics.

Control algorithms
By creating function blocks with a complete set of parameter-based functions, the user can develop and fine-tune control strategies without designing control functions. All necessary functions are available and documented as configurable selections. The application engineer simply assembles the blocks into the desired control configuration with a minimum of effort. A self-documenting, programming-free controller configuration is what makes the DCS architecture efficient to engineer and troubleshoot.

Take, for example, a commonly used process control function, the PID block. In Experion LS, using a DCS-style global data model, all aspects of the PID function are contained in a single tabbed configuration screen. Various algorithms are available for selection. Parameters used for alarming, trending and history in the HMI are configured here, so no further configuration is needed to populate the HMI.
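A minimal sketch of the parameter-driven idea (purely illustrative - not Honeywell's actual block, and all names are hypothetical): a PID function block whose behaviour, from gains to output limits to alarm limits, is selected entirely through named parameters, so the engineer configures rather than programs.

```python
class PIDBlock:
    """Illustrative parameter-based PID function block (positional form).

    All behaviour is selected through configuration parameters;
    the application engineer never writes control code.
    """
    def __init__(self, kp=1.0, ki=0.0, kd=0.0,
                 out_min=0.0, out_max=100.0, hi_alarm=None):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.hi_alarm = hi_alarm          # optional PV high-alarm limit
        self._integral = 0.0
        self._prev_err = 0.0

    def execute(self, sp, pv, dt):
        """One fixed-cycle execution: returns (output, alarm_active)."""
        err = sp - pv
        self._integral += err * dt
        deriv = (err - self._prev_err) / dt
        self._prev_err = err
        out = self.kp * err + self.ki * self._integral + self.kd * deriv
        out = max(self.out_min, min(self.out_max, out))  # clamp to limits
        alarm = self.hi_alarm is not None and pv > self.hi_alarm
        return out, alarm

# Configuration, not programming: behaviour is chosen by setting parameters.
pid = PIDBlock(kp=2.0, ki=0.5, out_max=100.0, hi_alarm=95.0)
out, alarm = pid.execute(sp=50.0, pv=40.0, dt=0.1)
```

Because the alarm limit lives on the same block as the control tuning, the HMI can read both from one place - the single-data-model point made below.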

Application software
In the world of DIY, one can find all of the applications needed to run a process plant. Just look in the catalogues from PLC and HMI vendors. Customers can make a list, place their purchase order and soon licenses, DVDs and downloads will begin to arrive. But isn’t it easier to order one model number and receive everything needed at once via the same download or DVD? One license can supply all of the controlware, a data historian, trend objects, business integration software, and graphics needed to run a plant. With the DCS architecture, all of the control applications load correctly, are guaranteed to be the correct version, and are tested to work together.

Next, think about the 20-30 year life span of an automation system. How often will the typical user need to expand or modify their system? How many times will they want to add a new control technology to the system? By partnering with a major DCS supplier, users are assured that they will always receive a complete, tested suite of applications as they expand and upgrade.

Data management
There is an old adage that goes something like: “Show me a person with a wristwatch, and I will show you a person who knows what time it is. Show me a person with two wristwatches, and I will show you someone who is not sure.” Multiple data models spawn multiple data elements representing the same piece of information. This happens when the DIY distributed control system is pieced together. When piece parts are brought together to form a system, the various data models must be synchronised and maintained - a burden that falls on application engineers and system administrators.

With DCS architecture, the entire data model has been conceived to cover all parts of the system. One data owner can provide that piece of information to any application or service anywhere in the system. The issue here is not the number of databases. The key is having a single data model so, no matter where a data element resides, it can be used by any element of the architecture and that particular data element is never duplicated. A comprehensive data model does not necessarily mean one database, but it does mean only one location for any given element of data.
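The ‘one owner per data element’ principle can be illustrated with a toy registry (hypothetical names, not an Experion API): each tag is registered exactly once, and every consumer - HMI, historian, alarm subsystem - resolves it through the same model rather than keeping its own copy.

```python
class DataModel:
    """Toy single data model: each tag has exactly one owner and location."""
    def __init__(self):
        self._registry = {}   # tag name -> (owner, value)

    def register(self, tag, owner, value):
        # A second registration of the same tag would create a duplicate
        # data element, which the single data model forbids.
        if tag in self._registry:
            raise ValueError(f"duplicate data element: {tag}")
        self._registry[tag] = (owner, value)

    def read(self, tag):
        """Any application reads the same, single instance of the tag."""
        owner, value = self._registry[tag]
        return value

model = DataModel()
model.register("FIC101.PV", owner="Controller-A", value=42.7)
# HMI, historian and alarm subsystem all resolve the same element:
hmi_value = model.read("FIC101.PV")
```

In the DIY case, by contrast, the HMI package and the historian would each hold their own copy of FIC101.PV, and keeping them synchronised becomes someone's job.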

Batch automation
The comprehensive nature of the DCS architecture has long been favoured for batch automation projects. More than anywhere else, batch requires careful coordination between phases, units, recipes and formulas. Even the classic DCS architecture has been challenged to provide a complete ‘packaged’ solution because of all the various and diverse elements in a batch environment. For this reason, many batch automation projects have resorted to a myriad of packages brought together to form the solution.

With Honeywell’s Experion Batch Manager, the various aspects of the batch automation solution are captured in a single DCS data model. All elements needed for batch management and execution run in the process controller, or a redundant pair of controllers when robustness is desired. There is no longer a need for a PC operating as a batch server. Because all batch elements are handled in the controller, users experience faster batch execution, reduced cycle time and increased throughput. Operators learn one consistent environment for alarms, security and displays, so fewer errors are made. From an engineering and maintenance perspective, the advantage is in learning and supporting one tool, with no duplication in engineering.
