
Saturday, November 7, 2015

Data Diodes for Levels 2-3 and 3-4 Integration

Blog entry by Stan DeVries.
Data diodes are network devices which increase security by enforcing one-direction information flow.  Owl Computing Technologies’ data diodes hide information about the data sources, such as network addresses.  Data diodes are in increasing demand in industrial automation, especially for critical infrastructure such as power generation, oil & gas production, water and wastewater treatment and distribution, and other industries.  The term “diode” is derived from electronics, which refers to a component that allows current to flow in only one direction.
The most common implementation of data diodes is “read only”, from the industrial automation systems to the other systems, such as operations management and enterprise systems.


This method is not intended to establish what has been called an "air gap" defense, where there is an often unreasonable expectation that no incoming data path exists.  An air gap means there is no physical connection between two networks, so information does not flow in either direction.  Instead, the data diode method is used as part of a "defense in depth" strategy, as described in the NIST 800-82 and IEC 62443 standards, and it is applied to the network connections that have the greatest impact on the integrity of the industrial automation system.

One-way information flow frustrates industrial protocols that use the reverse direction to confirm that data was successfully received, and that trigger failsafe and recovery mechanisms when the flow is interrupted.  A data diode can pass files of any format as well as streaming data such as video.  An effective, vendor-neutral file transfer approach in industrial automation is to use the CSV (comma-separated values) file format: many tools are available to quickly format these files on the industrial automation side of the data diode and then "parse", or extract, the data on the other side.
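As a rough illustration, the sketch below (in Python, with hypothetical tag names, values, and file names) shows how a batch of process values might be formatted as a CSV file on the automation side and parsed again after it has crossed the data diode:

    import csv
    from datetime import datetime, timezone

    # Sender side: format one batch of sampled values as a CSV file.
    # The tag names, values, and file name below are hypothetical examples.
    samples = [
        ("Reactor1.Temperature", 78.4, "Good"),
        ("Reactor1.Pressure", 2.1, "Good"),
    ]
    timestamp = datetime.now(timezone.utc).isoformat()
    with open("batch_0001.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Timestamp", "TagName", "Value", "Quality"])  # header row
        for tag, value, quality in samples:
            writer.writerow([timestamp, tag, value, quality])

    # Receiver side: parse the file after the data diode has delivered it.
    # (Here we simply read back the file we just wrote, for demonstration.)
    with open("batch_0001.csv", newline="") as f:
        for row in csv.DictReader(f):
            print(row["Timestamp"], row["TagName"], float(row["Value"]), row["Quality"])

Keeping a header row in each file makes the transfer self-describing, so the parsing side does not depend on column order.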

There are two architectures that are feasible with data diodes, as shown in the diagrams below.
The single-tier historian architecture uses the industrial automation system's gateway, which is typically connected to batch management, operations management, and advanced process control applications.  This gateway is sometimes called a "server", and it is often an accessory to a process historian.  A small software application is added that either subscribes to or polls information from the gateway, periodically formats the files, and sends them to the data diode.  Another small application receives the files, "parses" the data, and writes it into the historian.
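A minimal sketch of such a sending application is shown below.  The poll_gateway() function is a hypothetical placeholder for a real OPC or gateway SDK read, and the transfer folder is an assumed location that the data diode's send side monitors:

    import csv
    import time
    from datetime import datetime, timezone
    from pathlib import Path

    SEND_DIR = Path("C:/DiodeTransfer/outbound")  # hypothetical folder watched by the diode's send side
    POLL_INTERVAL_S = 30                          # hypothetical polling period

    def poll_gateway():
        """Placeholder for an OPC or gateway SDK read; returns (tag, value) pairs."""
        return [("Unit1.Flow", 12.7), ("Unit1.Level", 64.2)]

    def write_batch(samples):
        """Format one polling cycle as a time-stamped CSV file in the transfer folder."""
        SEND_DIR.mkdir(parents=True, exist_ok=True)
        stamp = datetime.now(timezone.utc)
        path = SEND_DIR / f"batch_{stamp:%Y%m%d_%H%M%S}.csv"
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["Timestamp", "TagName", "Value"])
            for tag, value in samples:
                writer.writerow([stamp.isoformat(), tag, value])

    while True:  # runs as a simple background service
        write_batch(poll_gateway())
        time.sleep(POLL_INTERVAL_S)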
The Wonderware Historian version 2014 R2 and later can efficiently receive constant streams of bulk information and insert it correctly while continuing to perform its other historian functions.  This capability is called fast load.

For L2-L3 integration, the two-tier historian architecture also uses the industrial automation system's gateway.  The lower tier historian often uses popular protocols such as OPC.  This historian is used for data processing within the critical infrastructure zone, and it is often configured to produce basic statistics on some of the data (totals, counts, averages, etc.).  A small software application is added that either subscribes to or polls information from the lower tier historian, periodically formats the files, and sends them to the data diode.  Another small application receives the files, "parses" the data, and writes it into the upper tier historian, as sketched below.
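The receiving application can be equally simple: it watches the folder that the diode's receive server writes into, parses each CSV file, and hands the rows to whatever bulk-insert mechanism the upper tier historian provides.  In the sketch below the folder names are hypothetical and insert_rows() is a placeholder for that historian-specific step:

    import csv
    import time
    from pathlib import Path

    RECEIVE_DIR = Path("C:/DiodeTransfer/inbound")  # hypothetical folder the diode's receive side writes into
    PROCESSED_DIR = RECEIVE_DIR / "processed"

    def insert_rows(rows):
        """Placeholder for the historian's own bulk-insert mechanism (e.g. fast load or SQL)."""
        print(f"inserting {len(rows)} rows into the upper tier historian")

    def process_file(path):
        with open(path, newline="") as f:
            rows = [(r["Timestamp"], r["TagName"], float(r["Value"])) for r in csv.DictReader(f)]
        insert_rows(rows)
        PROCESSED_DIR.mkdir(parents=True, exist_ok=True)
        path.rename(PROCESSED_DIR / path.name)      # move aside so the file is not processed twice

    RECEIVE_DIR.mkdir(parents=True, exist_ok=True)
    while True:  # runs as a simple background service
        for path in sorted(RECEIVE_DIR.glob("*.csv")):
            process_file(path)
        time.sleep(10)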

The Wonderware Historian has been tested with a market-leading data diode product from Owl Computing Technologies called OPDS, the Owl Perimeter Defense System.  OPDS transfers files, TCP data packets, and UDP data packets from one network (the source network, N1) to a second, separate network (the destination network, N2) in one direction only, from source to destination, without transferring information about the data sources.  The OPDS is composed of two Linux servers running a hardened CentOS 6.4 operating system.  In the diagram below, the left Linux server (Linux Blue / L1) is the sending server, which sends data from the secure source network (N1) to the at-risk destination network (N2). The right Linux server (Linux Red / L2) is the receiving server, which receives data from Linux Blue (L1).


The electronics inside OPDS are intentionally physically separated, color-coded, and manufactured so that it is impossible to modify either the sending or the receiving subassembly to become bi-directional.  In addition, the two subassemblies communicate through a rear fiber optic cable assembly, which inspectors can easily disconnect to verify its functionality.  The Linux Blue (L1) server does not need to be configured, as it accepts connections from any IP address. The Linux Red (L2) server, however, must be configured to pass files on to the Windows Red (W2) machine.  This procedure is discussed in section 8.2.2.6 of the OPDS-MP Family Version 1.3.0.0 Software Installation Guide.  The two approaches can be combined across multiple sites, as shown in the diagram below.  Portions of the data available in the industrial automation systems are replicated in the upper tier historian.

Sunday, February 23, 2014

Deployed Enterprise Historian in the Cloud: Discussions of an Actual Use Case Confirm the Opportunity to Leverage the Cloud to Increase Uptime

This week I had dinner with a client from the water industry who has just deployed Invensys's Enterprise Historian in the cloud. I have had many discussions with potential users of a historian in the cloud, but the opportunity to talk one on one with one of the first implementers was too tempting to pass up.
He comes out of the water industry, and they deployed two systems for two city water systems. They have a tiered architecture, with tier 1 historians on the sites feeding the Enterprise Historian. They use a local historian with 7 days of storage, with the intent of using the Cloud Enterprise Historian as the analysis tool across the sites.
I asked, "Why did you go to the cloud?"
The answer was the increased reliability of the cloud versus hosting the system at the city center and expecting someone there to maintain it.
They have found over time that keeping a historian on site for long-term data in cities is not effective: maintenance of the PCs and servers, operating system upgrades, and management of data archiving were not done as regularly as required. With increased regulations requiring data to be stored for at least 7 years, this risk needed to be addressed. We discussed why historian maintenance was such an issue, and it came down to the historian falling under the plant automation teams, who did not see the PCs and servers as maintenance items the way they would, say, a pump. IT, on the other hand, would monitor the PCs with performance measures to escalate issues.
So, to resolve the issue, the client has taken advantage of the new Enterprise Historian in the cloud offering from Invensys/Schneider Electric, where the historian is managed by Invensys/Schneider Electric in its cloud system running on Microsoft Azure. The uptime of the system is supplied by Microsoft with its Azure infrastructure and data centers, combined with the expert managed services from Invensys/Schneider Electric, who install and set up the system, monitor it for data usage and archiving, and manage operating system and product upgrades.
This removes the whole burden of data management from the customer.
A series of clients hosted in the cloud is available to the user for analysis.
Security was not a question; it was assumed and believed that cloud infrastructure is more secure than what they can maintain on remote sites. This has been proven many times, and I ask people who doubt it to consider how secure their own managed historian really is, both in terms of uptime and data security, especially as the breadth of users accessing the data increases.
It was nice to validate the original intent of building an Enterprise Historian in the cloud, and to reaffirm the trend we are seeing of the internet becoming a natural part of the industrial information architecture.
This case was interesting because it was new, but I suspect that a year from now this will be common, as the challenge of maintaining historians and servers on remote sites grows and companies scale back on plant engineers.

The comment I heard in New Zealand a year ago, "Why would I put a server on a plant site in water again?", comes ringing back to my ears!