Oil refineries and petrochemical plants are designed to run continuously, usually for years at a time, to maintain critical production obligations and achieve financial goals. Consequently, problems disruptive enough to interrupt production or impair product quality are very serious and must be avoided. Disruptions can be related to equipment failure, such as a motor burning out, or stem from a process problem beyond operators’ ability to correct.
Plant managers look for ways to determine when a problem is beginning to develop, before it escalates into an unscheduled outage. For equipment, trouble brewing can be spotted with basic diagnostic sensors, such as bearing noise monitors, which can warn the maintenance department when a critical parameter is changing.
Process problems are often more subtle but can be just as disruptive, as we’ll discuss in two examples later in the article. In these situations, some process variable begins to move into dangerous territory and can’t be detected by the automation system, leaving operators to determine corrective action.
The question then emerges: was there anything in the production data prior to the incident that pointed to the situation developing? Operators and process engineers may pore over data for hours looking for clues, only to throw up their hands, spurring a search for new technologies to provide a solution.
Help from AI
Artificial intelligence (AI) is the topic of many discussions these days with suggestions as to how it will improve manufacturing in the future. A more practical approach is exploring what AI is doing right now to solve problems, as described below.
Human operators rely on the automation system to show them the overall state of the process. This should help them identify when there are signs of abnormalities developing and the cause. In reality, the automation system shows operators a wide range of variables, leaving them to make their best interpretation. If a problem occurs, operators must respond quickly and correctly to forestall an incident. But what they really want is a system capable of indicating the state of the process and identifying which sensor to focus on when an error occurs.
AI can identify the main factors contributing to abnormal situations and point out the specific sensors indicating the causes. This allows operators to concentrate on a small number of manageable elements to solve the problem, rather than trying to deal with a much larger number of sensors, most of which do not relate to the immediate problem.
This process begins with operators identifying a specific problem. Data scientists work with domain experts—the process engineers—to build a system incorporating a learning model (Figure 1). This combination of artificial and human intelligence (AI+HI) works with an observe-orient-decide-act (OODA) loop as an effective decision-making procedure:
- Observe—intelligent sensing
- Orient—advanced analytics
- Decide—real-time and astute decision making
- Act—agile actions for value creation.
Observe includes identifying the problem and setting the goal, defined as the process state in which the problem will have been solved. This calls for narrowing down the process data and maintenance information necessary for analysis, and then translating the problem into specific analysis tasks.
Orient determines which direction the analysis proceeds to solve the defined tasks. It combines AI technology, domain knowledge, and data scientists’ expertise to dig into the data for analysis.
Decide examines what actions are suggested by the analysis results. If directions seem to be going off on a tangent, plant personnel go back to the first step and see if the problem is defined correctly. The participants must agree on whether or not to implement a given plan.
Act puts the consensus plan into effect. This may involve adding capabilities such as edge computers, cloud computing, and data storage. If it proves necessary to redefine the tasks, plant personnel must return to the first step and reconsider the overall approach.
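The four phases above can be sketched as a simple control loop. This is purely an illustration of the methodology, not any Yokogawa implementation; all function names and data structures here are hypothetical.

```python
# Hypothetical sketch of the observe-orient-decide-act (OODA) loop
# described above; names and structure are illustrative only.

def run_ooda(observe, orient, decide, act, max_iterations=10):
    """Iterate the loop, returning to observe when decide rejects a plan."""
    for _ in range(max_iterations):
        tasks = observe()          # identify the problem, set the goal, select data
        analysis = orient(tasks)   # AI + domain knowledge dig into the data
        plan = decide(analysis)    # participants agree (or not) on an action plan
        if plan is None:
            continue               # redefine the problem and start over
        return act(plan)           # put the consensus plan into effect
    return None

# Minimal usage example with stub phases:
result = run_ooda(
    observe=lambda: ["define catalyst-activity KPI"],
    orient=lambda tasks: {"candidate_kpis": len(tasks)},
    decide=lambda analysis: "deploy KPI" if analysis["candidate_kpis"] else None,
    act=lambda plan: f"executed: {plan}",
)
print(result)  # executed: deploy KPI
```

The key structural point is the backward edge: when decide cannot endorse a plan, control returns to observe so the problem definition itself can be revisited.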
An ethylene producer contacted Yokogawa and asked for assistance in solving a list of recurring process-related problems. Working with the company's internal problem-solving team, all the participants engaged in a workshop to become familiar with the methodology and understand how the project would proceed.
Together, the team identified possible causes for each problem from hundreds of sensor parameters. The parameters were used to monitor the operational state of the equipment and create an AI model to detect anomalies in the equipment and understand the plant status. This methodology was applied to eight projects; we will look at two of them.
Case 1: Benzene Production Reactor
An ethylene plant produces cracked gasoline, which is converted to benzene by adding hydrogen in a reactor with a catalyst. This also removes impurities. To maintain a stable reaction, it is necessary to modulate hydrogen flow and reaction temperature to match raw materials.
The catalyst deteriorates gradually, resulting in lower catalytic performance. However, since there is no method to quantify activity, catalysts are periodically activated or replaced following a schedule based on run time or a calendar. This means that some catalysts are replaced even though they are still serviceable, creating extra maintenance.
Conversely, unexpectedly rapid deterioration of the catalyst ahead of replacement time causes an increase in impurity levels, resulting in defective and unsellable product. This creates several major problems:
- Reduced production with missed targets
- Loss of raw materials
- Increased cost of disposal
- Shutdown for emergency maintenance.
The observe phase concluded the operators needed a KPI for catalyst activity, because without one they couldn't tell when conditions called for a catalyst change before costly production problems developed. This knowledge would optimize maintenance and avoid product losses.
The orient phase examined production data for the previous two years, during which time there were three catalyst changes: two following the normal schedule, and one emergency replacement due to production problems.
Process data during stable operation and the time just before the emergency maintenance became training data, and was analyzed by AI to create a training model, which was then applied back to the data. This resulted in a catalyst health index, formed from a synthesis of multiple measured process variables identified by the AI analysis, which became the very KPI the operators needed.
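One simple way to form an index of this kind, shown here only to illustrate the idea (the actual model behind the catalyst health index is not published in this article), is to score how far key process variables have drifted from their stable-operation baseline. The variable names and values below are invented for the example.

```python
import math

# Illustrative health index: mean absolute z-score of selected process
# variables relative to a stable-operation baseline. All variable names
# and numbers are hypothetical, not taken from the plant described above.

def baseline_stats(stable_history):
    """Per-variable (mean, std) computed from stable-operation data."""
    stats = {}
    for name, values in stable_history.items():
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        stats[name] = (mean, math.sqrt(var) or 1.0)  # guard against zero std
    return stats

def health_index(sample, stats):
    """Average drift of each variable from baseline, in sigmas; lower is healthier."""
    drifts = [abs(sample[name] - mean) / std for name, (mean, std) in stats.items()]
    return sum(drifts) / len(drifts)

stable = {
    "reactor_temp_C": [320.0, 321.0, 319.5, 320.5],
    "impurity_ppm": [10.0, 11.0, 9.5, 10.5],
}
stats = baseline_stats(stable)
print(health_index({"reactor_temp_C": 320.2, "impurity_ppm": 10.2}, stats))  # small: healthy
print(health_index({"reactor_temp_C": 335.0, "impurity_ppm": 40.0}, stats))  # large: degraded
```

In practice the AI analysis also selects which variables enter the synthesis and how they are weighted, which is where it adds value over a hand-built score like this one.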
When the two years of data was examined using this index (Figure 2), it became clear that catalyst health could be determined, and it was possible to decide when it needed to be replaced prior to product degradation.
The decide and act phases became clear. Operators now follow the catalyst health index in real time and schedule catalyst changes based on condition. This results in maximum catalyst life while avoiding product degradation and emergency shutdowns.
Case 2: Cracking Furnace Cooling Tower
In ethylene plants, ethane, naphtha, and other raw materials are heated in a cracking furnace. To prevent excessive cracking, the hot gas moves to a cooling tower, where cold water is sprayed into the stream to reduce the temperature to <35 °C. The heated water is sent to various heat exchangers in the plant and eventually recirculated.
During the summer, the cooling tower seemed to lose capacity, making it difficult for operators to control the process. The cracking reaction continued and impeded separation of the desired components, causing poorer product quality and yield. For several years, this situation was reluctantly tolerated as an unavoidable seasonal effect, at least until the problem unexpectedly vanished during the summer of 2019. There were no obvious climatic reasons, so the engineering team wanted to find out what had changed, and how to keep the problem from returning.
The observe phase set the two-part goal: from plant data, establish an indicator of an operating condition causing the loss of cooling effectiveness and identify a parameter closely related to this temperature rise. By changing the parameter, operators could improve operation.
The orient phase examined data from the previous two years, 2018 and 2019 (Figure 3a), when the temperature increased and stayed flat, respectively. Whatever happened in 2019 solved the problem, but no one could positively identify the specific change.
The AI analysis used to build the training model suggested several candidate parameters, including cooling tower temperature and cooling water flow rate. The analysis found these parameters affected the temperature and flow rate of the heat exchanger adjacent to the cooling tower, and both were closely related to the temperature in the cooling tower.
The training model created an index capable of predicting the effectiveness of the cooling tower. This was a synthesis of process variables, both upstream and downstream from the cooling tower itself. The higher the index, the less likely there would be conditions capable of causing loss of cooling capacity and product problems. The pleasant surprise of improved temperature control experienced in the summer of 2019 became reproducible at will.
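Screening candidate parameters for their relationship to the cooling tower temperature can be illustrated, in its simplest form, by ranking correlations. This toy example uses invented variable names and data; the actual analysis combined many more variables into the index described above.

```python
# Toy illustration of screening candidate parameters by how strongly
# they correlate with the cooling tower outlet temperature.
# All variable names and data are invented for the example.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

outlet_temp = [34.0, 35.5, 36.0, 37.5, 38.0]
candidates = {
    "cooling_water_flow": [120, 110, 105, 95, 90],             # strong negative
    "exchanger_inlet_temp": [80, 82, 83, 86, 87],              # strong positive
    "ambient_pressure": [101.2, 101.3, 101.1, 101.3, 101.2],   # unrelated
}

# Rank candidates by absolute correlation with the outlet temperature.
ranked = sorted(
    candidates,
    key=lambda name: abs(pearson(candidates[name], outlet_temp)),
    reverse=True,
)
print(ranked)  # strongly related parameters first, unrelated ones last
```

A ranking like this only flags association, not causation; confirming which parameter operators should actually manipulate still requires the domain experts in the orient and decide phases.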
Understanding what AI is and what it can do is difficult to pinpoint because it takes so many forms. Within process manufacturing, it can help solve many types of problems because it is a methodology as much as a technology. It requires engagement between HI and AI as illustrated in these examples. AI becomes the tool to extend the capabilities of HI once the human domain experts define the root problems to be solved.
AI Going Forward
Today, AI in process manufacturing is limited to analysis, rather than a primary method of real-time process control. However, this is changing. Yokogawa and others have been involved in experiments to replace traditional PID-loop-based control strategies with a single comprehensive AI system able to learn how to optimize control of a single process unit or an entire refinery. After all, if AI techniques can control a self-driving car, why not an ethylene plant?
The answer is, it can and will. Yokogawa has introduced machine learning as a practical technology for process manufacturing and, leveraging the results, is accumulating application cases in laboratory settings and actual facilities. This requires developing technology to ensure safety and versatility, with key abilities including:
- Safe learning methods
- Robust ability to handle disturbances
- Fast response to changes in set points
- Continuous online learning
- Applicability and transferability of models to multiple facilities.
The goal is an AI-based control system able to deal with challenges quickly and effectively to improve process performance and plant operations.
All figures courtesy of Yokogawa
About the Author
Dr. Hiroaki Kanokogi is a general manager in the Yokogawa Products Headquarters at Yokogawa Electric Corporation. After graduating from the University of Tokyo, he developed machine learning for natural language processing at Microsoft. He has been researching the industrial use of AI technology at Yokogawa Electric since 2007.