Showing posts with label Operational platform. Show all posts

Sunday, March 15, 2015

Multi-site standards have to be economically viable, but the operational value of standards, not the IT requirement, is the driver

While flexibility allows us to deal with the plant floor reality, it also comes at a cost and thus requires governance. This is typically where the IT and Engineering perspectives tend to clash:

  1. Standardization (what corporate IT desires): How do we deploy “out-of-the-box” or packaged solutions that reduce risk and time-to-value in implementations across the plant sites? Increasingly, operational value is driving standards and platforms.
  2. Flexibility (what Engineering desires): How do we support the various customizations needed to accommodate the heterogeneous nature of the process within a plant site?

But there is growing demand for agility: the ability to absorb new production plans and new product introductions with minimal impact on day-to-day operations. Combine this with the need to “accommodate variability” in automation systems, often from different vendors, across multiple plants and equipment, as well as variety in team skills and experience. The implementation of platforms combined with standards provides the necessary abstraction to “accommodate” this variation. So the move to standards is growing, driven more by the push for operational continuity than by IT (which drove it based on cost of implementation and sustainability).


To resolve these two seemingly opposed expectations, large enterprise users of a platform use a Center of Excellence approach to centrally manage the template library while helping orchestrate each plant’s technology roadmap in a way that is aligned to its Continuous Improvement journey.
The illustration below maps, at a high level, the governance process of how templates are created, maintained, and modified to support the rollout across a multi-plant standardization effort.

Many of the most successful companies driving standards are now seeing the rewards and returns through the agility to absorb new plants into their organization while still leveraging the existing, unique automation and plant floor systems.

But so many of them comment that they learned the hard way the need for governance, plus site collaboration, to make the standards effective and their adoption successful. Too many state that building standards from the corporate center outward seems logical, but in reality much of the knowledge is in the field, and capturing that experience back into the standards is key. So, too, is the shift in standards away from a project DNA to more of a “product” life-cycle DNA.

The important learning is that standards are part of a program, and part of learning, but the return is now significant not just from an IT point of view but from the “operational side”; this is where the significant economic returns are seen, through operational consistency and agility. Understand that standards are a program, clearly understand the governance required to succeed long term, and invest up front with the field so the standards will be adopted. Combine this with clear KPIs that capture why you are implementing a platform and standards, so the value can be measured for the long term, as this is a long-term initiative that must enable sustainable innovation.

Monday, March 25, 2013

Why is monitoring giving way to exception-based “self-aware” philosophies?


Over a year ago, I jointly authored a whitepaper with leading thought leaders at Shell on the future of oil and gas field systems:

“Establishing a Digital Oil Field data architecture suitable for current and foreseeable business requirements”.

The paper raised a number of concepts; one of them was the need to move away from monitoring to exception-based philosophies in the operational system. Over the last two weeks of discussions with industry thought leaders in oil and gas, power, infrastructure, and mining, it has become clear that the growth in data from the field through smart devices is accelerating the shift to “self-aware”, exception-based systems. It has been fascinating to see, in just a year, the increased realization that a different approach to the operational experience is needed to enable effective decisions and actions. The shift to exception-based systems brings in concepts such as:

  • “Self-aware” entities: these will live either in the smart device or, for a smart process, at a higher level, detecting conditions and triggering notifications, operational procedures, and awareness.
  • Advanced process graphics: the shift to uncluttered views and easy identification of abnormal conditions.
  • Situational awareness: the ability to focus, understand, and associate related knowledge for rapid decisions and effective actions.
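To make the first of these concepts concrete, a “self-aware” entity can be sketched as a simple condition-detection loop that watches its own measurements, detects an abnormal condition locally, and escalates it as a notification instead of streaming every reading upstream. This is a minimal illustration only; the names (`SelfAwareEntity`, `subscribe`, `observe`) are my own and not from any specific product or platform.

```python
# Sketch of a "self-aware" entity: it detects a condition locally and
# escalates it to subscribed operational procedures or notification
# handlers. All names are illustrative, not from any vendor platform.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class SelfAwareEntity:
    name: str
    high_limit: float                                   # condition to detect locally
    subscribers: List[Callable[[str, float], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[str, float], None]) -> None:
        """Register an operational procedure or notification handler."""
        self.subscribers.append(callback)

    def observe(self, value: float) -> bool:
        """Evaluate a new measurement; notify only when the condition trips."""
        if value > self.high_limit:
            for notify in self.subscribers:
                notify(self.name, value)
            return True
        return False


# Usage: a wellhead pressure entity that raises awareness on a high reading.
alerts = []
well = SelfAwareEntity(name="WH-101 pressure", high_limit=250.0)
well.subscribe(lambda src, v: alerts.append(f"{src} exceeded limit: {v}"))
well.observe(240.0)   # below limit: no upstream traffic, no notification
well.observe(260.0)   # condition detected locally and escalated
```

The design point is that the entity itself decides when something is worth reporting; the higher-level system only hears about exceptions.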
The paper centered on Digital Fields, but the principles apply across industries:
“Another key instrumentation requirement is to report by exception, i.e. Sensors to have a remotely configurable ability to detect and report changes. This approach will minimize source data flows and as much as possible distribute intelligence to the lowest possible level and thus minimize data volumes/complexity in PAS and higher level systems.

For example, consider a well head pressure transmitter. With this approach, the transmitter will be smart enough to recognize and report only changes greater than say one percent, however, an authorized user should be able to remotely change the reporting threshold, to say .5% if so required. Compare this to the current approach where all data flows through PAS systems to historians which, ironically, store large volumes by filtering data in a similar manner. Hence the virtue of filtering at source, minimizing data transfer volumes and minimizing data storage in higher level systems. Note, exception reporting at source applies to analog and vector/matrix parameters - digital parameters naturally report by exception.”
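The transmitter behavior described in the extract is essentially a percentage deadband filter: emit a value only when it deviates from the last reported value by more than a configurable threshold, which an authorized user can change remotely. The sketch below illustrates the idea; it is not vendor firmware, and the class and method names are mine.

```python
class ExceptionReporter:
    """Report-by-exception filter for an analog signal (illustrative sketch).

    Emits a value only when it deviates from the last *reported* value by
    more than a configurable percentage deadband, mimicking a smart
    transmitter that filters at source rather than streaming raw data.
    """

    def __init__(self, threshold_pct: float = 1.0):
        self.threshold_pct = threshold_pct
        self.last_reported = None

    def set_threshold(self, threshold_pct: float) -> None:
        """Remote reconfiguration, e.g. tighten from 1% to 0.5%."""
        self.threshold_pct = threshold_pct

    def sample(self, value: float):
        """Return the value if it should be reported, else None."""
        if self.last_reported is None:
            self.last_reported = value          # always report the first sample
            return value
        change_pct = abs(value - self.last_reported) / abs(self.last_reported) * 100
        if change_pct > self.threshold_pct:
            self.last_reported = value
            return value
        return None                             # suppressed: within the deadband


# Usage: pressure samples in bar; only significant changes leave the device.
reporter = ExceptionReporter(threshold_pct=1.0)
sent = [v for v in (100.0, 100.5, 102.0, 102.5, 101.8) if reporter.sample(v) is not None]
# 100.0 is reported (first sample) and 102.0 is reported (a 2% change);
# the other samples stay within the 1% deadband and never hit the network.
```

Note that the deadband is measured against the last *reported* value, not the previous sample; otherwise a slow drift would never be reported at all.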
The above extract from the paper outlines the concept and the why. The essential point is that monitoring is not practical as data levels from the field increase exponentially. We need to reduce data on networks, especially distributed networks, and moving to local detection and “self-analysis” will do this. The “self-aware” approach enables local detection of a condition, which is then escalated to operational people who can drill down and draw on local data as needed. This approach also means intelligence sits at, or close to, the source.

How real is this? Last week an opportunity came to me where a gas wellhead will have all of its instrumentation and control on the well, along with a web server/service and a wireless 3G connection. A follow-on discussion was at what level in the architecture this detection should sit: as low as possible, down in the instruments and controllers, but there was also discussion of “smart/intelligent” processes and process units deployed at the operational/supervisory platform layer of the architecture, bringing together information and data from multiple sources and raising exceptions.
The above diagram shows the concept of a smart process well, more than a data structure, and why a data-centric architecture built on historians will not be satisfactory. There is a need for an “Operational Application Platform” that enables smart, “living” entities that can be distributed across an industrial landscape and managed as standards: feeding into local data storage systems, but doing more, by triggering exceptions and events and executing escalation and awareness across the operational team, enabling alignment and responsiveness in this ever-growing operational environment.
I will expand on the “Operational Application Platform” vs. the “Enterprise/Data-centric Historian” strategies; both are valid, but for different uses, and there seems to be some confusion between them.