Saturday, June 29, 2013

Multiplying the Value of the Expert Community Interacting with Multiple Operational Teams


Last week I talked about harnessing the experienced community for knowledge management, enabling contribution through "crowd sourcing". This week I would like to expand the concept to the virtual expert team, and how critical it is to empower this community of knowledge and experience to work in real time with plant operational staff, enabling real-time decisions.

In January we introduced the operational landscape in 2020, where tenures in roles or locations are projected to be below 2.4 years (I believe it will be less than this in many areas). The other notable observation was that 1 in 3 of the workforce will be in project work rather than careers, temporarily engaged to supply expertise and skill to a project or situation. This gives rise to a knowledge "vacuum" effect when they leave, often causing uncertainty and a lack of confidence in making decisions.

This introduces the challenge of enabling decisions despite a lack of experience.

Companies talk about their experts, whether in the company or in partners, who are stretched, and how they keep "taking the expert to the problem" when they need to transition to "taking the problem to the expert".

During the last two weeks I have been engaged with a couple of large distributed projects, in environments where there is 20% turnover of expertise among trades people, operational, maintenance, and process experts and staff. The growth in companies investigating or implementing "operational centers" is indicative of the challenge: the experts can be located in a central location and interact with multiple sites. In theory this works; in practice, the expert could be on site or traveling, and often, in order to retain the expert, he must be able to contribute from a location of his choice.

Companies are also looking to multiply the value from these experts, requiring that they are not restricted to contributing only to their local plant or teams. As companies look to align and manage the end-to-end value chain, these experts, whether in operations, planning, maintenance, engineering, process, IT, or quality control, become critical assets.

The concept of significantly reducing the "time to performance/expertise" is key, built on a number of foundational approaches, all possible today:

1/ Knowledge systems that enable natural contribution, with knowledge management to gather and maintain value, leveraging technologies that harness "crowd sourcing" of the expert community.

2/ Embedding of operational processes, shifting to "intelligent work" with contextual information for decisions and recommended actions all associated in the work item. Gathering these operational best practices from the experts at all levels in the company, and embedding them into the operational system, produces consistent behavior in response to situations.

3/ Virtual expert communities that can be notified in real time and access status and information in a consistent and trusted context. They view the same information as the plant operational staff, but can apply their own analysis, experience and expertise, and share it with the plant operational workers for decisions. Key is that the expert works from wherever he is, on whatever device, and can collaborate, share and contribute. The expert could be in the company or a supplier, but is part of the "virtual trusted expert" community, seeing only the information they need, while remaining easily accessible to the plant user with no effort to access.
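The notification idea in point 3 can be sketched as a simple skill-matching registry. This is purely illustrative; the expert names, skill tags, and matching rule are my assumptions, and a real system would push alerts to devices rather than return a list:

```python
from dataclasses import dataclass


@dataclass
class Expert:
    name: str
    skills: set
    online: bool = True


class ExpertRegistry:
    """Hypothetical sketch: match a developing plant situation to the
    online experts whose skills fit it, wherever they are."""

    def __init__(self):
        self.experts = []

    def register(self, expert):
        self.experts.append(expert)

    def notify(self, situation_tags):
        # In a real system this would push an alert to each matching
        # expert's device; here we just return who would be notified.
        return [e for e in self.experts
                if e.online and e.skills & situation_tags]


registry = ExpertRegistry()
registry.register(Expert("Ana", {"compressor", "vibration"}))
registry.register(Expert("Raj", {"distillation"}, online=False))
matched = registry.notify({"vibration", "bearing"})
```

Only the online expert with an overlapping skill set is selected; an offline expert is skipped even if their skills match.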

So as a situation develops, a plant worker can focus on that situation and see all the virtual experts for that situation who are online, along with their skills. They can chat, they can share by dragging and dropping, and they can talk, all in a couple of keystrokes or gestures. This collaboration must be natural, combined with core trust in the expert and in the system, so that the advice from the expert is relevant to the current situation. Also key is the ability to call on the knowledge system for similar situations and draw information easily.

Now the experts are contributing to multiple sites and situations without traveling, and contributing to knowledge capture at the same time. This whole concept is a paradigm shift in knowledge and expertise, but it is the foundation for companies to be able to accommodate this transitional operational landscape.

Sunday, June 23, 2013

Harnessing Crowd Sourcing for the Industrial Knowledge System


With the transformation of the workforce from the "baby boomers" to Gen Y, we see the rise of the team concept and the execution of activities across multiple skills and areas of expertise. As the duration of people in a role or location reduces (with the norm expected to be 2 years or less), the importance of knowledge systems is going up. Companies are putting formal positions in place for knowledge management, to capture knowledge of how the plants are set up and operated, and of situations. This role will become the center of a movement to convert "Data to Information to Wisdom".

The issue is that the data/knowledge today is in different forms and locations, and is often "stranded".

The real thought leadership relative to knowledge is coming from companies who are looking to "crowd sourcing" to harness the capture of experience. This is not so much about the capturing system; the real challenge is the transformation of culture toward "crowd sourced" contribution from the operational community of the company.

Speaking with one company last week: they are putting in place a knowledge (wiki-like) system, combining it with two key adoption initiatives:

·         Make it simple to contribute no matter where people are, on the job or at home, to the extent that the system accepts multiple sources easily: if someone works in MS Word or Outlook, they can contribute by sending an email that becomes a posting. I have seen this work in the past: when I was sailing I needed to blog in a disconnected mode, so I would capture my thoughts in an email with the subject line as the title, and when I connected I would send the email to an address that added it to the system. The knowledge leader last week stated he wanted to leverage the flying time and train time of his knowledge community.

·         Next was an incentive program rewarding valued and best contributions.
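The email-to-posting mechanism in the first initiative can be sketched with the standard library's email parsing: the subject line becomes the title and the plain-text body becomes the content. The subject and body text here are invented for illustration:

```python
import email
from email.message import EmailMessage


def email_to_post(raw_bytes):
    """Turn a raw email into a (title, body) posting: the subject line
    becomes the title, the plain-text parts become the content."""
    msg = email.message_from_bytes(raw_bytes)
    title = msg["Subject"] or "Untitled"
    if msg.is_multipart():
        parts = [p.get_payload(decode=True).decode()
                 for p in msg.walk()
                 if p.get_content_type() == "text/plain"]
        body = "\n".join(parts)
    else:
        body = msg.get_payload(decode=True).decode()
    return title, body.strip()


# Compose a message the way a disconnected contributor would
draft = EmailMessage()
draft["Subject"] = "Pump 3 seal failure lessons"
draft.set_content("Seal failed due to dry running; add low-flow interlock.")

title, body = email_to_post(draft.as_bytes())
```

A mail gateway watching the inbox would then call `email_to_post` on each arriving message and create the wiki entry.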

He recognized the short period left, likely the next 3 to 5 years, to capture much of this knowledge, but he was looking at plans to offer extensions for retiring workers to continue to contribute and be rewarded while also interacting with the active plant workers.

Now capturing is key, but so is shifting this to effective "wisdom" that can easily be searched and understood: using tools to understand patterns and to tag these data lakes, capturing knowledge and experience on everything from wiring and process behavior to best practices, conditions, and actions. Another key discussion point was a dedicated management team, as Wikipedia has, that reviews and tags all contributions so the knowledge is effective. This also means deleting out-of-date data, reviewing common areas of searching, and finding holes in the knowledge map. This is not a part-time job; it is a whole program if it is to be effective.
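The value of that curation effort can be sketched with a minimal tagged store and inverted index: the tags the review team applies are what make contributions findable. The entries and tags here are hypothetical examples:

```python
from collections import defaultdict


class KnowledgeBase:
    """Minimal sketch of a tagged-contribution store: an inverted index
    from curated tags to entries makes the knowledge searchable."""

    def __init__(self):
        self.entries = {}                 # entry id -> text
        self.index = defaultdict(set)     # tag -> entry ids

    def add(self, entry_id, text, tags):
        self.entries[entry_id] = text
        for tag in tags:
            self.index[tag].add(entry_id)

    def search(self, *tags):
        """Return entries matching ALL requested tags."""
        ids = set.intersection(*(self.index[t] for t in tags))
        return [self.entries[i] for i in sorted(ids)]


kb = KnowledgeBase()
kb.add(1, "Boiler feed pump cavitation fix", {"pump", "cavitation"})
kb.add(2, "Compressor surge best practice", {"compressor", "surge"})
hits = kb.search("pump", "cavitation")
```

Reviewing which tags are searched but return nothing is one simple way to find the "holes in the knowledge map" mentioned above.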

The final part of the puzzle is making this knowledge easily available, and building a culture that leverages it. In initial trials this works easily for Gen Y, as they expect to search, understand, and learn on the fly. Again the key is ease of access from the day-to-day tools the users are already using.
Too often documentation and knowledge are afterthoughts; it is good to see this transforming, even if it is only among the early adopters. This will be a growing transformation, leveraging capture tools such as YouTube videos, wikis, and sketching tools.

Sunday, June 16, 2013

Commentary Feedback on "Third Time Lucky for MOM/MES!!!! YES, it does have a significant chance this time!!"


It is interesting writing this blog, as there is little comment feedback on the blog itself, but I receive a lot of email on certain subjects with substantial input. On the topic of "Third time lucky for MES" I received a lot of email and had many face-to-face discussions; this post expands on the topic.

The first comment from many people was that "MES has been around for years and has been implemented successfully", and I agree. But the first two generations of architectures involved significant services; while those systems have run extremely well for a number of years, they have limited ability to absorb change without significant cost and risk. Also, companies are expanding to multiple sites, with the requirement to enforce operational practices over all of them, and this requires alignment of sites. So yes, MES has been successful in concept, but not as a sustaining mechanism.

Charlie’s comment in “The MOM Chronicles” reflects this:

“The first two MOM attempts occurred in the 1990s, and 2000s, actually were also found a primary hindrance to continuous improvement efforts because the MOM system owners were typically understaffed, under skilled, and un governed to support real innovation. “

"So what is different this time?" was a common question. As mentioned in the original post, SOA (Service Oriented Architecture) has actually been adopted correctly this time, with conforming service contracts on the ERP and business side, and with "Enterprise Service Buses" (ESBs) becoming a norm, not just a term. This is flowing down into the operational world, with vendors looking to align with web services; model-centric alignment, rather than point integration, is key. The figure below, taken from "The MOM Chronicles", illustrates the concept:

The experience we have had with MES implementations at Invensys has pushed us to evolve our architecture, and the key areas are:
1/ The MES functional capability has evolved in richness.
2/ The plant events are now linked into the System Platform using templates, so plant equipment and events can be templated and managed; validation is also achieved as close to the source as possible.
3/ The human interaction and the business rules and processes are no longer programmed; they are implemented in a model-driven (workflow) environment. So now the faceplates/forms that present information and interact with humans validate the data entry as early as possible, with no code, in graphical modeling environments. This area alone is transformational: I have sat down with process/operational teams and these graphical workflows, worked through them relative to their process, marked up the diagram, and implemented fast. No longer are programmers involved in business rules; the operational/business/process teams who know the rules and behaviors they require implement them themselves.
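The spirit of point 3, rules as editable models rather than code, can be illustrated with a toy declarative validator: the rules are plain data a process team could edit, and entries are checked as close to the point of entry as possible. The field names and limits are invented for illustration and are not from any Invensys product:

```python
# Rules are data, not code: a process team can add or change a rule
# without programming. (Hypothetical fields and limits.)
RULES = [
    {"field": "batch_id",    "required": True},
    {"field": "temperature", "min": 0.0, "max": 150.0},
    {"field": "operator",    "required": True},
]


def validate(entry, rules=RULES):
    """Check a form entry against the declarative rules, catching bad
    data at entry time rather than downstream."""
    errors = []
    for rule in rules:
        value = entry.get(rule["field"])
        if rule.get("required") and value in (None, ""):
            errors.append(f"{rule['field']} is required")
        if value is not None and "min" in rule and value < rule["min"]:
            errors.append(f"{rule['field']} below {rule['min']}")
        if value is not None and "max" in rule and value > rule["max"]:
            errors.append(f"{rule['field']} above {rule['max']}")
    return errors


errs = validate({"batch_id": "B-102", "temperature": 180.0, "operator": ""})
```

A graphical workflow tool is doing something similar underneath: the diagram the team marks up is compiled into rule data like this, not into hand-written code.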
The diagram below shows the realization of the aspects of the Invensys MES architecture, and it is all three aspects together that make this a sustainable solution, one that scales and also works in a multiple-site situation. But most of all, it is agile, able to absorb change. Again, this is a different approach from traditional MES solutions, which have the MES functionality but honestly depended on coding around the system for events, human interaction and business rules. This architecture is SOA, so it is plug and play, and services like the MES can run in different locations, leaving open the opportunity for MES databases and rules to run in an elastic "cloud", combined with the on-premise interactions with the plants and people.


 

Tuesday, June 11, 2013

Fast Data, Data in Context, Is as Critical as Big Data


I was listening to sessions from the M2M (Machine to Machine) conference, and this interview with Chris Baker from Oracle was fascinating, as it echoed what we are seeing in the industrial/operational world.

Design systems assuming massive amounts of data, not just for today but also for the future. This means the architecture must be able to access the different data, put it in context, and provide pattern and exception-based analysis so that decisions, and then the associated actions, can be taken.

Data without the ability to take action is of no value, and it is key that we move to the "intelligent work" concept, where information is delivered in the context of role and situation, with the associated actions, independent of location, device and architecture, as these will change.
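One way to picture "intelligent work" is a work item that carries its situation, its context, and the recommended actions together, with delivery filtered by the recipient's role. The situation, roles, and actions below are invented examples:

```python
from dataclasses import dataclass


@dataclass
class WorkItem:
    """Sketch of an 'intelligent work' item: the information, its
    context, and the recommended actions travel together."""
    situation: str
    context: dict
    actions: dict   # role -> recommended actions for that role


def actions_for(item, role):
    """Deliver only the actions relevant to the recipient's role."""
    return item.actions.get(role, [])


item = WorkItem(
    situation="Feed rate 12% below plan on Line 2",
    context={"equipment": "Line 2 extruder", "deviation_pct": -12},
    actions={
        "operator": ["Check feeder hopper level", "Verify screw speed"],
        "maintenance": ["Inspect feeder drive for slippage"],
    },
)
todo = actions_for(item, "operator")
```

Because the actions are attached to the item rather than to a screen, the same item can be rendered on any device or location, which is the independence the paragraph above calls for.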

Interview with Chris Baker, SVP, Oracle at M2M WORLD CONGRESS 2013 - YouTube

Sunday, June 9, 2013

Disruptive Opportunity with Transitional Activities (FAT, Training, Simulation) for the Industrial Space


As the weeks go on, the discussion continues to grow around the opportunities for, and consideration of, cloud playing a part in industrial automation and operations architectures among end users. One of the emerging realizations is the opportunity in the transitional activities of bringing a plant online, such as commissioning and operational learning. These are not mission-critical activities; as one person put it, "why would you want to buy and set up hardware for activities such as FAT (factory acceptance testing) or training simulation for only 2 to 3 months?"

Sunday, June 2, 2013

Operational Intelligence (Enterprise Manufacturing Intelligence) vs Business Intelligence (BI): the Difference, and It Is Time It Was Recognized!


So many times when I visit a customer site, or talk with product developers or engineering houses, people get confused over the roles of each system; they must work in conjunction, but they are not the same. This is especially true when companies already have a business intelligence strategy and tools, as well as process analysis tools (trending). But let's move the focus away from engineers to the consumers of the information and their transforming role in achieving operational excellence.

The question is often asked: "why should a company implement an EMI solution if they have already spent money on a BI solution? They already have the 'slice and dice' and analytical ability within BI, so why waste money on an EMI solution?"

The realization is that users in real-time operations require the capability to make decisions, and to access "trustworthy information" quickly and easily: quickly seeing the status of plant and operations, and easily applying limited operational analysis to answer well-known operational situational questions. EMI and BI have different purposes, and they are aimed at different audiences. Manufacturing-specific reporting and intelligence are different in content, context and data frequency from the data in BI.

I had a long discussion with Gerhard Greeff (Divisional Manager: Bytes PMC, MESA trainer) on this subject, and he fully agreed about the misunderstanding people have, and how often they have tried to use BI tools to build operational dashboards for operations, only to see them go unaccepted and unused. Also, this exercise results in significant IT projects to build the tools and gather the data, often far less effective than MS Excel, with which many operational people will configure what they want. The requirement now is for consistency of information and measures. There is also a transformation in the market, driven by "Apple", in which time to access and value are far more critical than "perfection of information layout", introducing the concept that "good enough" will do, much like the applications on smart phones that we download based on a functional need, with limited ability to change them beyond basic configuration, but which work and deliver value fast.

You may be asking for clarification of the difference, so I have used some text from Gerhard's paper in "The MOM Chronicles":

“Data in a BI solution is typically at the same low frequency as that of the ERP system such as daily values. For a Plant manager that wants to know what is happening on a shift or hourly basis, BI will thus be inadequate. BI tools are typically not designed and implemented to take into account the real-time nature of manufacturing operations and its very large data rate. As such, BI are not able to handle the high frequency of data receipt and the required fast response-times of reporting/visualisation required by manufacturing operations.

Executives use BI as strategic analysis and decision-making tools for the company. From their BI systems, they can see the profitability of individual plants and sites and, as such, can make the decision to close down a plant or to change the manufacturing strategy. They typically work on confirmed and validated numbers and results as they want to ensure they have accurate data when they make the decision. These validation/confirmation or auditing steps often add considerable time between the actual event and the time the data end up in the BI solution.

Site-level production personnel however cannot wait for the niceties of auditing and validation before they take action. If a report or an EMI dashboard indicates that something is wrong, it is their responsibility to investigate and take corrective action. If a feed-rate is lower than planned, the production manager is not going to wait for the confirmed result in the BI system tomorrow before he takes corrective steps. No, he is going to investigate or have someone investigate for him. If it turns out to be a false alarm, then he is glad as it is a crises averted. If something is wrong, he takes corrective action, or at least knows and expects the bad results from the BI system tomorrow. Production executives hate surprises, even good ones.

EMI systems thus have a two-fold purpose:

1. To provide early warning in real-time for potential problems in order to make decisions or take action, and

2. To provide “slice and dice” data on historical data and Operational data for process improvement, and operational status, delivering the information in time, equipment, and operational context.

EMI has data available at the granularity and frequency delivered by the individual applications. This can be from seconds to days, depending on the specific operations requirement. The data is also available per individual piece of equipment, line or processing unit and can also be rolled up into hours, shifts, days or weeks for any of these. The granularity of EMI systems is closer to real-time, and they are often used as real-time dashboards for Operations Executives.

BI may be able to provide the historical “slice and dice” data, but typically, not at the level of granularity required by operations managers. BI will not be able to provide the real-time early warning required by the plant. Both of these are thus needed to support manufacturing companies adequately.”
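The granularity gap Gerhard describes can be sketched with a toy roll-up: the same minute-level plant readings yield an EMI-style hourly view and a BI-style daily value. The timestamps and values are simulated, and real EMI/BI systems would of course do far more than averaging:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Simulated high-frequency plant readings: one value per minute for a day
start = datetime(2013, 6, 1)
readings = [(start + timedelta(minutes=i), 100 + (i % 60) * 0.5)
            for i in range(24 * 60)]


def roll_up(readings, key):
    """Average readings into buckets defined by `key` (hour, day, ...)."""
    buckets = defaultdict(list)
    for ts, value in readings:
        buckets[key(ts)].append(value)
    return {k: sum(v) / len(v) for k, v in buckets.items()}


# EMI-style view: 24 hourly values the shift can act on today
hourly = roll_up(readings, key=lambda ts: ts.replace(minute=0))

# BI-style view: one daily value, typically loaded after validation
daily = roll_up(readings, key=lambda ts: ts.date())
```

The point of the sketch is the asymmetry: the daily value can always be derived from the minute data, but the hourly early-warning view can never be recovered from the daily number, which is why BI alone cannot serve the plant.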
The challenge vendors have is how to deliver this operational information in a rapidly consumable form, with minimal time and effort outside of operations. The system will need to evolve, with more operational questions answered out of the box, or with an experience that enables operational people to answer these questions themselves.