Sunday, April 27, 2014

Momentum Grows for the Internet of Things in Industrial and Manufacturing Environments

As I enter another week of brainstorming workshops across Schneider Software with thought leaders, we are working through innovations in technology, architecture, and process, centered around the "cloud" and the "Internet of Things." While we have offerings in both areas, we need to shift from standalone offerings to making these technologies a natural part of the industrial architecture and of the process of buying and applying an industrial solution. I also spoke with many people across the world over the last two weeks, not about technology but about their solution challenges, and three main challenges kept coming up:
  •  People and operational change: moving to teams that include remote workers and virtual experts
  •  Agility to change production and processes quickly and efficiently
  •  Holistic control across manufacturing and industrial assets. This requires extending reach to assets that may be mobile or remote from the traditional industrial plant, yet are still part of the value chain for which the team is accountable.
Accountability came up over and over again; it means different things in different contexts, from the safety of people, to food safety and traceability, to efficiency and responsibility for environmental impact. It was clear in all the discussions that a change in architectural thinking and approach was needed, and instead of being a "dream" it is genuinely possible today, in a sustainable way, using technologies such as cloud and the Internet of Things. The architecture below is an example of a distributed system that still needs a unified operational experience across roles and across devices, and a way of unifying those devices.
Also during the week I came across two articles, one from Microsoft and another from ARC covering the Hannover Fair, both supporting the increased momentum in the market for the Internet of Things.


The key, as Microsoft points out below, is the opportunity to investigate and improve your business by naturally looking beyond traditional architectures, overcoming concerns such as security by implementing secure architectures and approaches; as they note, security "does not mean isolation."
The operational transformation is here from a design point of view; again, this is one of the technologies and approaches to be employed to achieve the operational workspace and efficiency required in the flat world.

Sunday, April 20, 2014

"Self-Service" Data to Information is Key for Industrial Analysts

I am on the road for a month, which is always a good opportunity to speak with new people, and again the discussion of access to information across sites and equipment becomes critical. This requires different tools and a different approach, and it involves different people.
For example, I was at dinner with an engineer who runs the installation and tuning team for a wind turbine manufacturer. He talked about how his team follows the installation team and goes in to set up the turbines. While I expected the discussion to center on tuning and setup of the turbine itself, it was actually about how they set up the data gathering equipment and make sure the contextualization is captured in the data, so that he can later convert that data to information. This is not a nice-to-have; it is now a natural and critical part of wind farm setup. Once they leave the farm (which is usually in the middle of nowhere), a second phase of tuning begins: an analysis phase. This he does with his team from anywhere but the site; they capture the data and start identifying situations and patterns, applying past known conditions, and he talked about doing this analysis across tens of wind farms and hundreds of turbines from all over the world. This engineer was unaware of what I do and whom I work for, so the discussion was very candid and, to me, very interesting: here was a Gen Y mechanical engineer who simply assumes this information will be available, and for whom the critical part of installation is setting up the data acquisition system so that he is empowered to work from wherever he is.
My next questions tried to understand the type of analysis being done and the profile of the people using the information. "Self-service" is the key, together with a sense of discovery and insight. Tools like trending, MATLAB, and Excel play a part in the analysis, but the real asset was the set of spreadsheet models and analyses they had built up over time. Note that this was not just about reporting and dashboards; it was the discovery aspect, the ability to combine different data sources easily, e.g. to compare like turbines.
Everything needed to be "plug and play," allowing turbines to be added by people who are not instrumentation specialists, with easy access to the information. The data from the turbine alone is not enough; information about the wind farm is key to providing the context in which the turbine is performing. Orientation, terrain, and weather input for that site, past, present, and forecast, are all essential. So the requirement is merging data sources from sites with other models, and then comparing site to site in order to improve and evolve.
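To make that concrete, here is a minimal sketch of what merging turbine data with site and weather context could look like, written in Python with pandas purely for illustration; all file names, columns, and groupings are hypothetical and do not reflect any actual product or data model:

```python
import pandas as pd

# Hypothetical turbine telemetry: one row per turbine per hour
turbine = pd.read_csv("turbine_telemetry.csv",
                      parse_dates=["timestamp"])   # site_id, turbine_id, timestamp, power_kw, rotor_rpm

# Hypothetical site context: orientation and terrain, plus hourly weather per site
site = pd.read_csv("site_metadata.csv")            # site_id, orientation_deg, terrain_class
weather = pd.read_csv("site_weather.csv",
                      parse_dates=["timestamp"])   # site_id, timestamp, wind_speed_ms, temperature_c

# Contextualize: join telemetry with site metadata and the weather for the same hour
contextualized = (turbine
                  .merge(site, on="site_id", how="left")
                  .merge(weather, on=["site_id", "timestamp"], how="left"))

# Compare "like" turbines: mean power by terrain class and wind-speed band
contextualized["wind_band"] = pd.cut(contextualized["wind_speed_ms"],
                                     bins=[0, 4, 8, 12, 25])
summary = (contextualized
           .groupby(["terrain_class", "wind_band"], observed=True)["power_kw"]
           .mean()
           .unstack("wind_band"))
print(summary)
```

The point of the sketch is the shape of the workflow, not the tooling: telemetry only becomes comparable across farms once the site context travels with it.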
But in the discussion it was clear that he and his team are ideal candidates for not storing the data locally and instead moving to a cloud architecture: these remote sites gather data and push it to the cloud, where it is combined with the weather data for that site already held in the cloud, and is then consumed, explored, and shared through tools and models in the cloud. In this way the virtual team and virtual sites can become unified and effective in their collaboration and evolution.
Time for a New Approach to Industrial Information
It may have been Tom Davenport, noted professor, author, and analytics expert, who first came up with the terms descriptive, predictive, and prescriptive to describe the three stages of maturity for analytics use within an organization:
Descriptive Stage: What happened in the past?
Predictive Stage: What will (probably) happen in the future?
Prescriptive Stage: What should we do to change the future?
The first stage (descriptive) is the traditional approach: trend analysis, simple tabular reports, and dashboards all fall into the descriptive bucket. When applied effectively, these technologies provide visibility into what happened, but only up to a point. Many companies are now also rapidly adopting a third class of descriptive solution, visual data discovery. The reason is simple: it can significantly improve the odds that managers and process engineers find the right information at the right time.
But a report of that type is never going to help answer some important questions that may arise, such as: "Why is work in progress taking longer than in the past?" Likewise, an indicator on a dashboard could show current on-time delivery performance. In practice, dashboards are often more flexible than reports, enabling users to drill down from summary information to detailed data, which can help managers understand cause and effect. But ultimately, users are still limited to the questions anticipated by an IT specialist when he or she first developed the dashboard. And that is the crux of the problem: most current BI solutions still largely depend on IT specialists to create new BI assets (such as reports and dashboards) or to modify existing ones.
Why does that matter? It matters because most decisions have a distinct "window of opportunity." In other words, after a certain point in time, any value to be had from making a decision simply vanishes. For example, the opportunity to serve a load demand only exists while the window of demand is open; it requires the ability to bring a set of wind turbines up in a timely manner, an understanding of the landscape across the wind farm, and the weather model for the next 24 hours. Once the demand has gone, the window of opportunity and the need for that data have gone too; if it took 20 hours to get the information, the opportunity has been lost. In practice, all decisions have a point in time after which they are no longer relevant.
Clearly, something more is required. And for an increasing number of organizations that something is a visual data discovery tool. Visual data discovery tools provide a highly visual workspace that encourages process engineers and managers to manipulate data hands-on. They offer an engaging experience for exploring data freeform, with minimal or no help from skilled IT staff. Starting from the first glimmer of a problem or opportunity, users can investigate freely, follow their train of thought, and link cause to effect. That is exactly the type of capability required to furnish answers to unexpected questions, the type of questions that conventional reports and dashboards often struggle to answer.
Visual data discovery tools typically provide:

• Unrestricted navigation through, and exploration of, data, for example via search
• Rich data visualization so information can be comprehended rapidly
• The ability to introduce new data sources into an analysis to expand and follow it further

These factors are at the core of self-service analytics.
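As a purely illustrative sketch of that kind of freeform, follow-your-train-of-thought exploration, the Python snippet below assumes a hypothetical contextualized turbine dataset and a hypothetical maintenance log; the file and column names are invented and not part of any particular discovery tool:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical hourly turbine data already contextualized with wind speed
df = pd.read_csv("contextualized_turbines.csv", parse_dates=["timestamp"])

# Start broad: the power curve for every turbine on one chart
df.plot.scatter(x="wind_speed_ms", y="power_kw", alpha=0.1)
plt.title("All turbines: power vs. wind speed")
plt.show()

# Follow the train of thought: flag turbines well below the typical power for
# their wind-speed band, then pull in a new data source (a maintenance log)
# to look for a possible cause
expected = (df.groupby(pd.cut(df["wind_speed_ms"], bins=20), observed=True)["power_kw"]
              .transform("median"))
suspects = df.loc[df["power_kw"] < 0.7 * expected, "turbine_id"].unique()

maintenance = pd.read_csv("maintenance_log.csv", parse_dates=["date"])
print(maintenance[maintenance["turbine_id"].isin(suspects)].sort_values("date"))
```

The value is in the second half of the script: a new question appeared mid-analysis, and a new data source could be folded in without going back to an IT specialist.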

As the manufacturing world grows increasingly fast-paced and dynamic, self-service analytics, likely offered as an online service that can be consumed from anywhere, onboarded quickly, and with certain tools used only as needed, will enable a more cost-effective, reliable, and powerful industrial analysis environment. It is clear to me that domain engineers like my friend in wind turbines need a platform of tools to build their domain solutions, delivering a "self-service" domain solution for wind turbine tuning and onboarding that can truly keep up with this dynamic world. As we move to "microgrids," where expert decision makers with long experience are not always available, decisions and actions will need to be enabled through "self-service" visual domain tools.

Sunday, April 6, 2014

Resetting the Way we do Automation/ Operational Projects!

A couple of weeks ago I talked about 3rd-generation MES and the shift to "model-driven" systems in order to absorb change in operational practices; this drives the shift toward developing standards and rolling them out across plants, as well as the elimination of custom code.
Again last week, the realization that we, as engineering and IT, need to rethink the approach came through in discussions with two customers. The discussion really centered on two items:

1/ The speed at which projects in the automation/operations world need to happen: it is halving, as one plant engineering manager pointed out. But in his next breath he stated that projects are no longer stable, i.e. you can be sure that the business will drive operational change requiring the project to evolve twice within 12 months.
The discussion was interesting, as one of the two had grown up in the same environment as I did and reflected on the fact that projects used to take a year, they were significant, and they stayed in a stable condition for five years; this was the basic rule I also found in Europe and the Middle East in the 80s and 90s.

2/ The second was the scope of responsibility and rollout: both commented on how they used to be in charge of a team that looked after one or two plants, whereas now they run projects that span multiple sites, often crossing national borders. The expectation is that the same capability will be rolled out over multiple sites at decreasing cost and with decreased time to production, and that the changes arising in the following 12 months must be rolled out as well.

Combine this with more complex projects as Level 2 and Level 3 of the traditional automation hierarchy merge, and the traditional approach to project management and project evolution is no longer valid.
At the ARC conference in Orlando in February, this same message was echoed by leaders from Exxon, GM, and Nestlé.
Sandy Vassar, Facilities I&E Manager at ExxonMobil Development Company, used the phrase "it just happens" to indicate his team's goal for the automation portion of each of the more than 100 oil and gas projects now in various stages of planning and execution at the company.
He believes the industry needs "lean project execution" that separates the physical system from the software. Toward this end, technology suppliers have to think differently and deliver technology in a way that allows the team to eliminate, simplify, and/or automate steps in the overall execution of automation.

He listed the top twelve challenges:
1. Eliminate, simplify, and/or automate steps in the overall execution of automation
2. Minimize custom engineering and reduce the total amount of engineering
3. Shift the custom engineering to the software and rely on standard hardware components; progress hardware fabrication independently of software design
4. Virtualize the hardware and prove the software design against the virtualized system
5. Prevent design recycle and hardware/software rework
6. Eliminate unnecessary automation components and standardize the remaining components so all systems look alike across projects
7. Eliminate or minimize the physical, data, and schedule dependencies with other disciplines
8. Simplify the configuration of interfaces with third-party packages
9. More easily accommodate changes, including late changes
10. Mitigate the effects of hardware and software version changes
11. Eliminate, simplify, and/or automate generation of required documentation
12. Challenge traditional approaches

None of this is unique or new, but like the conversations last week, this forum confirmed that a "rethink" is needed, along with cultural change. It is important to realize that while product vendors must increasingly provide platforms for reuse and for the management of standards, offering an environment where "unique operational practices" are captured through configuration in a way that allows evolution and reuse, the engineering community within companies and within system integrator partners also needs to evolve toward a more agile approach to projects, compared with the traditional approach of "my site is unique" and doing everything in a 100% custom way.


The new generation will expect things in a shorter time and will compromise uniqueness for speed and agility! One leading company produced this diagram when they looked at their multi-site automation projects, and it is telling.

I doubt there is any argument about the outcomes; the graphs show four key findings (principles):

1/ Integration: across different PLCs and systems and across sites, so there is one namespace, one alarm system, one history, one abstraction platform. But integration must be trustworthy and sustainable.

2/ Object-oriented (OO) supervisory: it is key to have an object-based, managed software approach with templates and management of standards at the supervisory and operational level. This is central to providing a consistent plant model, consistent information, and a consistent experience in UI and action, as well as a platform for the evolution of the system over time (a sketch follows after this list).

3/ ASM (Abnormal Situation Management) for UI and alarms: the move to exception-based awareness rather than continuous monitoring.

4/ PLC blocks: again, to enable management and standards that allow the system to absorb change.
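To illustrate the object-oriented, template-driven idea in principle 2, here is a minimal hypothetical sketch in Python; the class names, tags, and alarm limits are invented for illustration and do not represent any vendor's actual supervisory object model:

```python
from dataclasses import dataclass, field

@dataclass
class PumpTemplate:
    """A reusable supervisory 'standard': tags, alarm limits, and the UI faceplate
    are defined once and inherited by every instance across every site."""
    tags: tuple = ("run_status", "discharge_pressure", "motor_current")
    high_pressure_alarm: float = 8.5        # bar, the standard limit
    faceplate: str = "StdPumpFaceplate_v2"  # one consistent UI experience

@dataclass
class PumpInstance:
    name: str
    site: str
    template: PumpTemplate = field(default_factory=PumpTemplate)
    overrides: dict = field(default_factory=dict)

    def alarm_limit(self) -> float:
        # Local deviations are explicit overrides, not copy-pasted custom code,
        # so a change to the template can still be rolled out everywhere else.
        return self.overrides.get("high_pressure_alarm",
                                  self.template.high_pressure_alarm)

# One standard, many sites; evolving the template updates every conforming instance.
standard = PumpTemplate()
fleet = [
    PumpInstance("P-101", site="Plant A", template=standard),
    PumpInstance("P-204", site="Plant B", template=standard,
                 overrides={"high_pressure_alarm": 9.0}),
]
for pump in fleet:
    print(pump.site, pump.name, pump.alarm_limit())
```

The design point is that standards live in one place and instances only record their deviations, which is what makes multi-site rollout and later change absorption tractable.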

This is not new, yet I see only a few companies truly implementing solutions in this way, which leads me to ask: how will the others deliver systems for the current and future "dynamic world"?