Smart Cook: Making cooking easier with multimodal learning

Abstract

Learning how to cook presents at least two significant challenges. First, it can be difficult for novices to find appropriate recipes based on the ingredients available in their pantry and/or refrigerator. Second, it can be difficult to focus on cooking tasks and follow a recipe at the same time. In this poster, we present the design process and implementation of a system that uses deep learning to address the first of these two problems. Our initial design work focuses on streamlining the process of entering and tracking potential ingredients on hand and determining appropriate recommendations for recipes that utilize these ingredients. Here, we present the current state of our project, explaining in particular our contributions to minimizing the overhead of tracking kitchen ingredients and converting this inventory information into effective recipe recommendations using a multimodal machine learning approach.
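To make the recommendation step concrete, the sketch below shows one minimal, illustrative way to rank recipes against a tracked pantry inventory using a simple ingredient-coverage score. This is an assumption-laden baseline, not the system described in the poster: the actual approach uses a multimodal deep learning model, and all names and data in the sketch are hypothetical.

```python
# Minimal illustrative sketch (not the poster's model): rank recipes by how well
# their ingredient lists are covered by the items currently tracked in the pantry.
# The described system replaces this overlap score with a learned multimodal model;
# the recipes and pantry items below are hypothetical examples.
from dataclasses import dataclass


@dataclass
class Recipe:
    title: str
    ingredients: set


def coverage_score(pantry: set, recipe: Recipe) -> float:
    """Fraction of the recipe's ingredients that are already on hand."""
    if not recipe.ingredients:
        return 0.0
    return len(recipe.ingredients & pantry) / len(recipe.ingredients)


def recommend(pantry: set, recipes: list, k: int = 3) -> list:
    """Return the k recipes whose ingredient lists the pantry covers best."""
    return sorted(recipes, key=lambda r: coverage_score(pantry, r), reverse=True)[:k]


if __name__ == "__main__":
    pantry = {"egg", "flour", "milk", "butter", "tomato"}
    recipes = [
        Recipe("Pancakes", {"egg", "flour", "milk", "butter"}),
        Recipe("Tomato omelette", {"egg", "tomato", "onion"}),
        Recipe("Beef stew", {"beef", "carrot", "potato", "onion"}),
    ]
    for r in recommend(pantry, recipes):
        print(f"{r.title}: {coverage_score(pantry, r):.2f}")
```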

Publication Title

UbiComp/ISWC 2019 Adjunct - Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers