Congratulations to our REP students on emerging as the champion team in the IEEE INTUITION v6.0 Hackathon 2019

Name of the competition: IEEE INTUITION v6.0 Hackathon 2019

About the competition: iNTUition is NTU's only student-run 24-hour hackathon, organised by the IEEE NTU Student Branch. Within the stipulated time, teams are required to build something novel that they are passionate about. This edition is the sixth since the event's inception in 2014.

Date of the competition: 12–13 October 2019
Achievement: 1st Place
Prize: $2500
REP Students: Stark Chen, Ong Xing Xiang, Gao Xinrui, Benjamin Chew
Product/Proposal: LiteBox

Description: LiteBox is a solution targeting the issue of hygiene in the Food & Beverage (F&B) industry, using digital making and data analytics to classify plates as dirty or clean. The team also sees strong potential for the product in empowering the visually impaired and in regulating hygiene across the F&B industry. LiteBox consists of three parts:

1. Light Box
2. Machine vision
3. Mobile application

The light box provides a stable platform and controlled environment in which the machine vision can take place, fitted with electronic components to make it smart. An ultrasonic sensor positioned inside detects the presence of crockery and consequently switches on the internal LED lighting, providing a well-lit environment that reveals the state of the crockery placed within.
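The sensing logic described above can be sketched as follows. This is a hypothetical illustration, not the team's actual firmware: on real hardware the distance would come from an ultrasonic sensor read over GPIO, and the distance and margin constants are assumed values chosen for the example.

```python
# Hypothetical sketch of the light box's presence-detection logic.
# On real hardware the reading would come from an ultrasonic sensor via
# GPIO; here the measured distance is passed in so the logic is testable.

EMPTY_DISTANCE_CM = 30.0   # assumed echo distance to the box floor when empty
PRESENCE_MARGIN_CM = 5.0   # assumed tolerance before we call the box occupied

def plate_present(distance_cm: float) -> bool:
    """A plate raises the echo surface, shortening the measured distance."""
    return distance_cm < EMPTY_DISTANCE_CM - PRESENCE_MARGIN_CM

def led_state(distance_cm: float) -> str:
    """Switch the LED lighting on only while crockery is inside the box."""
    return "ON" if plate_present(distance_cm) else "OFF"
```

With these assumed constants, a reading well below the empty-box distance turns the lighting on, and a reading near the floor distance leaves it off.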

For machine vision, a pre-trained MobileNet image classifier was fine-tuned on an open-source Kaggle dataset containing dirty and clean dishes. This was then complemented by a set of images captured inside the light box, improving classification accuracy and precision. The model was then converted to TensorFlow Lite for incorporation into the Android app.
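However it is trained, a two-class model like this ultimately emits one score per class, which the app must turn into a dirty/clean verdict. A minimal, framework-free sketch of that final step, assuming a hypothetical class order of clean-then-dirty (the team's actual model and label order are not published here):

```python
import math

LABELS = ["clean", "dirty"]  # assumed class order of the fine-tuned model

def softmax(logits):
    """Convert raw model scores into probabilities that sum to 1."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Map a two-class output to its label and confidence."""
    probs = softmax(logits)
    i = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[i], probs[i]
```

In the real pipeline the logits would come from the TensorFlow Lite interpreter running the converted MobileNet; only the interpretation of its output is sketched here.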

Coded in Android Studio (Java), the mobile application provides a platform that brings together the two parts above, giving real-time feedback on whether the crockery being inspected is clean. The result of the machine-vision analysis is displayed in prominent red or green, making it easy to identify, including for the visually impaired.
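The app's feedback mapping can be sketched language-agnostically; the production app is written in Java, but the decision logic is the same. The hex colours, the confidence threshold, and the low-confidence "rescan" fallback are all assumptions for illustration.

```python
# Hypothetical sketch of the app's red/green feedback mapping.

RESULT_COLORS = {"clean": "#00C853", "dirty": "#D50000"}  # assumed hex values

def feedback(label: str, confidence: float, threshold: float = 0.5):
    """Return the banner text and colour shown to the user."""
    if confidence < threshold:
        return ("RESCAN", "#9E9E9E")  # assumed neutral grey when unsure
    return (label.upper(), RESULT_COLORS[label])
```

A high-confidence "dirty" verdict would thus render a prominent red banner, and a low-confidence result would prompt the user to rescan rather than risk a wrong answer.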
