An artificial intelligence that interprets images from wearable cameras can identify food and accurately estimate its weight to determine how many calories a person is consuming and what nutrients they are ingesting, which could be useful for automating dietary research.

Benny Lo at Imperial College London and his colleagues asked 13 subjects to wear cameras around their chests or mounted on glasses to capture images at mealtimes. The cameras took photographs every few seconds, and algorithms automatically discarded the more than 90 per cent of images that didn't depict food.
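The paper's filtering step isn't reproduced here, but the process the article describes can be sketched as a binary food/no-food classifier applied to every captured frame, with low-scoring frames thrown away. The model, threshold and preprocessing below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the frame-filtering step: score each captured frame with a
# (hypothetical) food/no-food classifier and discard frames below a threshold.
import torch
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def keep_food_frames(image_paths, model, threshold=0.5):
    """Return only the frames the classifier judges to contain food."""
    kept = []
    model.eval()
    with torch.no_grad():
        for path in image_paths:
            img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            food_prob = torch.sigmoid(model(img)).item()  # model assumed to output a single logit
            if food_prob >= threshold:
                kept.append(path)
    return kept
```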

The remaining images were annotated by dieticians and the meals were weighed. The images and data were then used to train an AI known as a neural network to identify types of food and estimate volume and nutritional content. Because the system monitors subjects continuously, it can also determine how much of a meal was actually eaten, rather than just recording the size of the meal served.
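One plausible way to set up the kind of network described above is a shared image backbone with two heads: one classifying the food type against the dieticians' labels, the other regressing portion volume and nutrient values against the weighed meals. The architecture, backbone and nutrient outputs below are assumptions for illustration, not the authors' design.

```python
# Minimal multi-task sketch: food-type classification plus nutrient regression.
import torch
import torch.nn as nn
from torchvision import models

class FoodNet(nn.Module):
    def __init__(self, num_food_types, num_nutrients=4):
        super().__init__()
        backbone = models.resnet18(weights=None)   # backbone choice is an assumption
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # share features between both heads
        self.backbone = backbone
        self.type_head = nn.Linear(feat_dim, num_food_types)     # which food is shown
        self.nutrient_head = nn.Linear(feat_dim, num_nutrients)  # e.g. kcal, carbs, fat, protein

    def forward(self, x):
        feats = self.backbone(x)
        return self.type_head(feats), self.nutrient_head(feats)

# Training would combine a classification loss on the dietician labels with a
# regression loss on values derived from the weighed meals, e.g.:
# loss = cross_entropy(type_logits, type_labels) + mse(nutrients_pred, nutrients_true)
```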


Lo’s team then got the AI to analyse new images from the wearable cameras, and weighed the meals to compare them against the estimates. The computer did better than humans at estimating the calories being consumed – it had an error rate of 37.6 per cent compared with the human error rate of 48.8 per cent.
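The article doesn't say exactly how the error rates were calculated, but a figure of this kind is typically a mean absolute percentage error of estimated calories against the weighed ground truth. The sketch below uses that assumption, with made-up numbers rather than the study's data.

```python
# Mean absolute percentage error of calorie estimates versus weighed ground truth.
def mean_percentage_error(estimated, actual):
    return 100 * sum(abs(e - a) / a for e, a in zip(estimated, actual)) / len(actual)

estimated_kcal = [520, 310, 740]   # hypothetical AI estimates
actual_kcal = [600, 280, 800]      # hypothetical values from weighed meals
print(f"{mean_percentage_error(estimated_kcal, actual_kcal):.1f}% error")
```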

Nutritional research has often relied on people self-reporting the content of their meals, but this can yield poor data because of bias and memory slips, says the team. It is also labour intensive. Much research has been done on automatically estimating the number of calories in a meal from a photograph taken before eating, but this approach doesn't take any leftovers into account.

The team hopes that the system will aid research in low-income countries where malnutrition is a big issue. The experiments were run in poor lighting conditions to mimic the lighting that might result from an inadequate electricity supply. Even so, Lo says some refinement of the neural network may be needed in real-world situations.

Reference: arxiv.org/abs/2105.03142