What do we actually teach young students about digitisation?

Engage the experts to unleash their true innovation power

Recently I ran a small survey among the students in my Chemical Reaction Engineering course. They are in their early to late twenties, Generation Y, well equipped with all the toys of modern communication technology. The homework I set is not solved; it is googled.

My question was how well they felt they understood the following mathematical and statistical methods:

  1. differential and integral calculus
  2. statistics and probability
  3. regression analysis
  4. statistical design of experiments (DoE)
  5. big data / chemometrics

In the first two fields the majority felt confident, even if no one went so far as to claim expert status. With regression analysis the group already grew noticeably more uncertain, and the last two topics were almost unknown.
Admittedly, this little experiment is not statistically clean enough to draw grand conclusions; the sample was too small and too homogeneous.
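To make the less familiar items a little more concrete, here is a minimal sketch in Python with numpy and scikit-learn (my choice of tools purely for illustration, and the numbers are invented): a two-level full-factorial design for two factors, evaluated with an ordinary least-squares regression, which is items 3 and 4 of the survey in miniature.

    # Items 3 and 4 in miniature: a two-level full-factorial design (DoE)
    # for two factors, fitted by ordinary least-squares regression.
    # The response values are invented purely for illustration.
    import numpy as np
    from itertools import product
    from sklearn.linear_model import LinearRegression

    # Full-factorial design: every combination of coded factor levels -1 / +1
    # (think low/high temperature and low/high catalyst loading).
    design = np.array(list(product([-1.0, 1.0], repeat=2)))

    # Hypothetical measured yields for the four runs of the design.
    yields = np.array([62.0, 71.0, 66.0, 83.0])

    # Linear model: yield ~ intercept + effect of factor 1 + effect of factor 2.
    model = LinearRegression().fit(design, yields)

    print("Estimated main effects:", model.coef_)
    print("Predicted yield at the centre point:", model.predict([[0.0, 0.0]])[0])

With fractional factorial or response-surface designs the same idea scales to many more factors at once, and that, in essence, is what item 4 is about.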

Nevertheless I ask myself: what has actually changed in our training system since the last quarter of the previous century? The weighting is the same as the one I took with me from school and university. I still remember my fascination and enthusiasm when I attended my first lecture by Prof. MacGregor on chemometrics, in the autumn of 2000 it must have been. What was possible even then! And how much more is possible today, with 15 more years of research and with much (much, much) more powerful computers? Everyone is talking about Big Data, yet it seems to me that at the working level it has not really arrived. How are we supposed to put these methods into practice in chemistry when the majority of the chemists and engineers now being trained do not even have a conceptual grasp of their basics?

I had hoped that at least one of my students would come and say: “Dr. Madl, of course we use modern statistical methods of experimental design. In our drug discovery laboratory we have two HTPE (high-throughput experimentation) robots, and in our plant all the process data are combined and processed for predictive process control.” The two methods mentioned are not Industry 4.0 innovations; they have been known for 10 or 20 years.

So how do we imagine the next wave, the next big wave, in the chemical industry? What will the level of acceptance be among employees, and among middle and senior management? Will a team of IT 4.0 consultants come and install the system? Will research and development simply hire a few IT experts in addition to (or instead of) new chemists? Or will there just be a Big Data department somewhere in India? And how should we imagine the integration, the collaboration?

I claim that Industry 4.0 takes place mainly in people's minds. All the buzzwords about lifelong learning: we really have to take them seriously now. The people who are supposed to generate innovation in the chemical industry have to innovate themselves; they have to keep up with the latest findings and the latest methods – in other words, with progress – even after 10 or 20 years in the profession. And above all, the universities, the training grounds for the innovators of tomorrow, must constantly adapt their curricula to the requirements of the future.

I would hope that, for all future graduates of higher education in chemistry, the needle in my survey will point far to the right in every area, and that they will all quite naturally use modern methods and software to generate knowledge as effectively as possible. After all, the whole fuss about Industry 4.0 and Big Data is meant to enable one thing above all: more time to think, with better data as a foundation and with powerful methods to put the results of that thinking to the test in scientific experiments – in order to develop better products and processes more quickly.