From Hokkaido University 30/08/23

Artistic depiction of machine learning analysis of chemical mixture ratios. Credit: Yasuhide Inokuma

Machine learning model provides quick method for determining the composition of solid chemical mixtures using only photographs of the sample.

Have you ever accidentally ruined a recipe in the kitchen by adding salt instead of sugar? Due to their similar appearance, it’s an easy mistake to make.

Visual inspection is likewise used in chemistry labs to make quick, initial assessments of reactions; however, just as in the kitchen, the human eye has its limitations and can be unreliable.

To address this, researchers at the Institute for Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University, led by Professor Yasuhide Inokuma, have developed a machine learning model that can distinguish the composition ratio of solid mixtures of chemical compounds using only photographs of the samples.

The model was designed and developed using mixtures of sugar and salt as a test case. The team applied a combination of random cropping, flipping and rotating to the original photographs to create a larger number of sub-images for training and testing.

Comparison of manual (top) and machine learning (bottom) methods for mixture evaluation. Credit: Yuki Ide, et al. Industrial and Engineering Chemistry Research. August 23, 2023

This enabled the model to be developed using only 300 original images for training. The trained model was roughly twice as accurate as the naked eye of even the most expert member of the team.
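As a rough illustration of this kind of augmentation, the minimal Python sketch below expands each photograph into several randomly cropped, flipped and rotated sub-images. The use of torchvision, the crop size, rotation range and sub-image count are assumptions for illustration, not details taken from the study.

```python
from PIL import Image
from torchvision import transforms

# Crop size, rotation range and sub-image count are illustrative assumptions.
augment = transforms.Compose([
    transforms.RandomCrop(224, pad_if_needed=True),  # random sub-region of the photo
    transforms.RandomHorizontalFlip(),               # random left-right flip
    transforms.RandomVerticalFlip(),                 # random top-bottom flip
    transforms.RandomRotation(degrees=180),          # random rotation
])

def expand(photo_path: str, n_sub_images: int = 20):
    """Turn one original photograph into many augmented sub-images."""
    original = Image.open(photo_path).convert("RGB")
    return [augment(original) for _ in range(n_sub_images)]

# 300 original photos x 20 sub-images each would yield 6,000 training samples.
```

Multiplying a small photo set in this way is a standard strategy for training an image model when only a few hundred originals are available.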

“I think it’s fascinating that with machine learning we have been able to reproduce and even exceed the accuracy of the eyes of experienced chemists,” commented Inokuma. “This tool should be able to help new chemists achieve an experienced eye more quickly.”

After the successful test case, the researchers applied the model to the evaluation of other chemical mixtures. It successfully distinguished different polymorphs and enantiomers, extremely similar forms of the same compound that differ only subtly in molecular or crystal arrangement.

Distinguishing these subtle differences is important in the pharmaceutical industry and normally requires a more time-consuming process.

The model was even able to handle more complex mixtures, accurately assessing the percentage of a target molecule in a four-component mixture. It also analyzed reaction yield, determining the progress of a thermal decarboxylation reaction.

Members of the research team at the Institute for Chemical Reaction Design and Discovery (WPI-ICReDD), Hokkaido University. Back row, left to right: Yasuhide Inokuma, Sheng Hu, Yuki Ide. Front row, left to right: Yuya Inaba, Hayato Shirakura, Taichi Sano. Credit: WPI-ICReDD

The team further demonstrated the versatility of their model, showing that, after supplemental training, it could also accurately analyze images taken with a mobile phone.
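The article does not describe how this supplemental training was performed; a minimal sketch of one common approach, fine-tuning an already-trained network on a small batch of phone photos, is shown below. The network architecture, checkpoint name and hyperparameters are assumptions, not the authors' actual setup.

```python
import torch
from torch import nn
from torchvision import models

# Illustrative model: a small CNN predicting a single mixture ratio (0-1).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
# model.load_state_dict(torch.load("lab_camera_model.pt"))  # hypothetical checkpoint from the original training

# A small learning rate lets the phone photos adjust, rather than overwrite,
# what the model already learned from the original camera setup.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def supplemental_step(phone_images: torch.Tensor, true_ratios: torch.Tensor) -> float:
    """One fine-tuning step on a batch of mobile-phone photos."""
    model.train()
    pred = model(phone_images).squeeze(1)
    loss = loss_fn(pred, true_ratios)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```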

The researchers anticipate a wide variety of applications, both in the research lab and in industry.

“We see this as being applicable in situations where constant, rapid evaluation is required, such as monitoring reactions at a chemical plant or as an analysis step in an automated process using a synthesis robot,” explained Specially Appointed Assistant Professor Yuki Ide.

“Additionally, this could act as an observation tool for those who have impaired vision.”
