Cloud vs On-premise Software for Pulp & Paper
Industries and organizations today face a constant need to innovate, scale, and stay flexible to meet ever-changing demands.
Model Predictive Control (MPC) is a versatile and widely used model-based control approach, in which the control strategy is optimized online over a pre-determined, receding predictive horizon. A central limitation of traditional online MPC optimization is that it requires relatively inexpensive models. As a result, linear and non-linear (quadratic) approximations to the plant model are used, unless, of course, an explicit model in the form of a differential equation is readily available. Non-linear modeling presents a computational challenge, since it requires solving non-linear programming problems online. This works well for relatively low-dimensional systems. But when the system is high-dimensional, non-linear, and exhibits multi-scale dynamics, such approximations fail to capture the system dynamics, resulting in poor control performance.
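To make the receding-horizon idea concrete, here is a minimal sketch of MPC on an illustrative scalar linear plant. The plant coefficients, horizon length, and cost weights are assumptions chosen purely for illustration, not anything from the mill deployment described here.

```python
# Minimal receding-horizon (MPC) sketch: at every step, optimize the
# control sequence over the horizon, apply only the first move, repeat.
# Plant x[k+1] = a*x[k] + b*u[k] and all weights are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

a, b = 0.9, 0.5          # assumed plant dynamics
horizon, r_weight = 10, 0.01
x_ref = 1.0              # setpoint to track

def cost(u_seq, x0):
    """Quadratic tracking cost accumulated over the predictive horizon."""
    x, total = x0, 0.0
    for u in u_seq:
        x = a * x + b * u                      # roll the model forward
        total += (x - x_ref) ** 2 + r_weight * u ** 2
    return total

def mpc_step(x0):
    """Optimize the whole horizon, but return only the first control move."""
    res = minimize(cost, np.zeros(horizon), args=(x0,))
    return res.x[0]

# Closed loop: re-optimize at every step (the "receding" horizon)
x = 0.0
for _ in range(20):
    u = mpc_step(x)
    x = a * x + b * u

print(x)  # the state should settle near the setpoint
```

The key point is the loop at the bottom: the optimization is cheap only because the model is cheap, which is exactly the limitation the text describes for high-dimensional, multi-scale plants.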
To address this challenge, recent studies have focused on “black-box'' or “data-driven'' modeling approaches using deep learning techniques such as neural networks and RNNs (references), which can accurately model complex phenomena. The availability of high-end GPUs, along with state-of-the-art machine learning frameworks such as TensorFlow and PyTorch, has also enabled the deployment of deep learning applications on cloud platforms.
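The essence of the data-driven approach is fitting a predictor of the plant's next output from logged input/output data. As a toy illustration, the sketch below uses plain least squares on lagged measurements in place of the deeper networks the text describes; the simulated plant and its coefficients are assumptions for illustration only.

```python
# Toy "data-driven" plant modeling: learn one-step-ahead dynamics from
# logged data, with no explicit differential-equation model of the plant.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a plant we pretend not to know: y[k+1] = 0.8*y[k] + 0.3*u[k] + noise
u = rng.uniform(-1, 1, 500)
y = np.zeros(501)
for k in range(500):
    y[k + 1] = 0.8 * y[k] + 0.3 * u[k] + 0.01 * rng.standard_normal()

# Regress the next output on the current (output, input) pair
X = np.column_stack([y[:-1], u])
theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

print(theta)  # recovered coefficients, close to the true [0.8, 0.3]
```

A deep network replaces the linear regression when the dynamics are non-linear and multi-scale, but the workflow is the same: collect data, fit the predictor, and use it inside the controller.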
However, fundamental issues surrounding data privacy, communication bandwidth, and the like are now driving AI from the cloud to edge devices. The latter have tight constraints on resources such as memory, power, and computing horsepower, which make them unsuitable for storing, processing, and training or re-training large, complex neural networks. In practice, one usually encounters a degradation of AI performance over time, often the result of a significant change in plant operating conditions, raw material, etc., necessitating periodic re-training or model updates to match current conditions. Performing this operation on the edge device is not feasible given its computational limitations, and doing so manually is a time- and labor-intensive job.
The Data Science Team at HABER, in close collaboration with the Embedded Systems Programming Team and the Cloud Program Control Team, built an end-to-end pipeline to address this challenge.
One of the top mechanical pulping mills in India wanted to optimize chemical dosages and reduce output variation so as to maintain the quality of the final pulp. Having built an algorithm to optimize chemical dosages at various stages, the central challenge, given the limitations of the edge device, was to monitor and update the AI's performance periodically, so as to arrest possible model drift and maintain accuracy.
There are two central issues one encounters when deploying AI at large process units: edge devices have tight constraints on memory, power, and compute, which make them unsuitable for training or re-training large, complex neural networks; and model performance degrades over time as plant operating conditions or raw materials change, so periodic re-training is unavoidable. Together, these imply that frequent model updates and re-training, though necessary, cannot be initiated on the edge device.
The Data Science Team at HABER, in close collaboration with the Embedded Systems Programming Team and the Cloud Program Control Team, designed a state-of-the-art algorithm to monitor and control AI performance in real time, detecting performance degradation and other fundamental changes in the standard operating parameters of the processing unit early on. Upon detecting such a change, the internal parameters of the AI are suitably altered to give the best possible accuracy, and the updated AI is then deployed back onto the edge device in real time, without affecting the process.
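The monitor-detect-update loop described above can be sketched as follows. This is a hedged illustration, not HABER's actual algorithm: it tracks the model's rolling prediction error on the edge and raises a flag when the error drifts past a threshold, at which point a cloud-side retrain and redeploy would be triggered. The window size and threshold are assumptions.

```python
# Sketch of a drift monitor: keep a rolling window of prediction errors
# and flag a retrain when the mean error exceeds a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, window=50, threshold=0.5):
        self.errors = deque(maxlen=window)   # rolling error window
        self.threshold = threshold

    def update(self, predicted, measured):
        """Record one prediction error; return True when retraining is due."""
        self.errors.append(abs(predicted - measured))
        mean_err = sum(self.errors) / len(self.errors)
        # Only judge once the window is full, to avoid noisy early triggers
        return len(self.errors) == self.errors.maxlen and mean_err > self.threshold

monitor = DriftMonitor(window=5, threshold=0.5)
# Healthy phase: small prediction errors, no trigger
flags = [monitor.update(p, p + 0.1) for p in range(5)]
# Drifted phase: the plant has changed, errors grow, trigger fires
flags += [monitor.update(p, p + 1.0) for p in range(5)]
print(flags)  # stays False while healthy, flips True once drift dominates
```

In a full pipeline, a True flag would kick off retraining in the cloud and push the updated model back to the edge device, closing the loop without manual intervention.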
This end-to-end cycle completely eliminates the need for manual intervention while maintaining model accuracy, and therefore also strict quality control on the end product.