The Smart Nation Translational Lab is about innovation: using technology to improve lives and create economic opportunity and societal value for Singapore.
View our promotional video to learn more!
Big Data Analytics
Noise-collection sensor nodes deployed at predefined locations analyse sound in real time. Each sensor node incorporates one or more computing engines capable of inferring the noise class from the sound data it captures. The host server stores a record of each sound event inferred by each individual sensor, along with complementary information such as the event time stamp, angle of arrival and sound pressure level. A look-up table in the host server holds the configuration parameters for each sensor node: each record associates a sensor ID with its geo-location, its mounted orientation relative to the earth's magnetic field and optional node-specific setup parameters. A simple visualization shows a heat-map plot of the sensor nodes, where colour represents the relative sound pressure level and additional visual symbols indicate the inferred noise class and direction of arrival. The sheer amount of sound data collected in the centralised database will be useful for further big data analytics.
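As a concrete illustration, the look-up table and per-event records described above could be modelled as follows. This is a minimal sketch: the field names, sensor IDs and the bearing helper are illustrative assumptions, not the lab's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SensorConfig:
    sensor_id: str
    latitude: float          # geo-location of the node
    longitude: float
    orientation_deg: float   # mounted orientation relative to magnetic north
    extra: dict = field(default_factory=dict)  # optional node-specific setup

@dataclass
class SoundEvent:
    sensor_id: str
    timestamp: datetime
    noise_class: str              # inferred on the node, e.g. "car_horn"
    angle_of_arrival_deg: float   # relative to the node's own orientation
    spl_db: float                 # sound pressure level

# Look-up table: sensor ID -> configuration record
config_table = {
    "SN-001": SensorConfig("SN-001", 1.3521, 103.8198, 90.0),
}

def to_absolute_bearing(event: SoundEvent, table: dict) -> float:
    """Combine the node's mounted orientation with the event's
    angle of arrival to get a bearing relative to magnetic north."""
    cfg = table[event.sensor_id]
    return (cfg.orientation_deg + event.angle_of_arrival_deg) % 360.0

event = SoundEvent("SN-001", datetime.now(timezone.utc), "car_horn", 300.0, 72.5)
print(to_absolute_bearing(event, config_table))  # 30.0
```

Joining each event against the configuration record in this way is what lets the heat-map place an event's direction of arrival on the map rather than relative to the sensor housing.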
Internet of Things, IoT
Evaluate and implement sensor solutions to collect sound, temperature, CO2, motion and humidity data, with edge processing, using wireless communication technologies such as WiFi, LoRa, Bluetooth or GSM.
Every successful IoT implementation depends on low-power devices. Such low-power processors generally come with limited resources, so most implementations need broad research and careful design, including system architecture, coding optimization, processor scheduling and efficient communication protocols. Continuous research into more time-efficient communication protocols and lower power consumption in algorithm implementations is essential to the seamless integration of smart sensors around us.
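Efficient communication on low-power links usually comes down to small, fixed-size messages. The sketch below packs a sound-event report into 11 bytes; the field layout and scaling factors are assumptions chosen for illustration, not a published protocol.

```python
import struct

# Layout: sensor id (uint16), epoch seconds (uint32), noise class (uint8),
# angle of arrival in half-degrees (uint16), SPL in tenths of a dB (uint16).
FMT = ">HIBHH"  # big-endian, 11 bytes total

def encode(sensor_id, epoch_s, noise_class, angle_deg, spl_db):
    """Pack one sound-event report into a compact binary payload."""
    return struct.pack(FMT, sensor_id, epoch_s, noise_class,
                       int(angle_deg * 2), int(spl_db * 10))

def decode(payload):
    """Unpack a payload back into engineering units."""
    sid, t, cls, angle2, spl10 = struct.unpack(FMT, payload)
    return sid, t, cls, angle2 / 2.0, spl10 / 10.0

msg = encode(7, 1700000000, 3, 247.5, 68.4)
print(len(msg), decode(msg))
```

Quantising the angle to half-degrees and the level to tenths of a decibel trades a little precision for a payload small enough to fit comfortably inside a single LoRa frame.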
Urban Sound Analytics
Inference of different sound classes using machine learning. Examples of sound classes include human voice, car horn, vehicle, music, shouting and alarm.
Terabytes of raw sound data are collected and classified into event and sound classes by machine learning, and are useful for big data analytics applicable to urban living. Much of this work is non-trivial, and the data are local and unique to the Singapore context. The research involves cleaning and preparing the collected data for further consumption in machine learning research and in applications to the urban noise issue.
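To make the classification idea concrete, here is a deliberately tiny stand-in for the kind of model described above: two hand-crafted features (RMS energy and zero-crossing rate) and a nearest-centroid rule, trained on synthetic clips. The real models would use richer features and learned weights; everything here is illustrative.

```python
import numpy as np

def features(x):
    """Per-clip features: RMS energy and zero-crossing rate."""
    rms = np.sqrt(np.mean(x ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(x))) > 0)
    return np.array([rms, zcr])

def fit_centroids(clips, labels):
    """Mean feature vector per class."""
    feats = np.array([features(c) for c in clips])
    return {k: feats[[l == k for l in labels]].mean(axis=0)
            for k in sorted(set(labels))}

def predict(x, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = features(x)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # tonal clip: low zero-crossing rate
noise = 0.5 * rng.standard_normal(8000)    # broadband clip: high zero-crossing rate
cent = fit_centroids([tone, noise], ["alarm", "traffic"])
print(predict(0.4 * np.sin(2 * np.pi * 440 * t), cent))  # "alarm"
```

Even this toy pipeline shows the shape of the problem: feature extraction on the raw waveform, followed by a decision rule fitted to labelled examples.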
Multitudes of research works around the world have explored different aspects of people's perception of noise. We conduct continuous research on developing and combining signal processing and machine learning engineering to collect, analyse and exploit sound and/or noise to improve people's day-to-day living conditions. To name a few, our ongoing research works include noise classification, noise masking and subjective studies of the impact of noise on people's comfort.
Intelligent Edge Computing
Use of ARM/FPGA/Atom or any embedded device with sufficient computing power to perform inference of sound classes, sound level computation, direction of arrival, and likelihood of fire or false alarm in actual living conditions. With the advancement of digital computing technology and the rapid growth of mass-volume semiconductor production, embedded processors are becoming very powerful in very low-cost devices. These processors are commonly integrated with wireless communication such as Bluetooth and WiFi to enable rapid implementation of intelligent edge computing. Applications that were once feasible only in the cloud or on server computing have progressively become available at the edge. With their low power consumption and comparable computing power, ARM processors have taken over many applications in areas once dominated by application-specific ICs (ASICs).
The integration of machine learning into embedded processors has been a keen research interest of ours, enabling more intelligent sensors and applications to run directly on the processors.
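Two of the edge computations named above, sound level and direction of arrival, can be sketched on synthetic data as follows. The mic spacing, sample rate and the two-microphone far-field model are illustrative assumptions; a deployed node would run an equivalent routine on its live audio buffers.

```python
import numpy as np

FS = 48_000             # sample rate, Hz (assumed)
MIC_SPACING = 0.2       # metres between the two microphones (assumed)
SPEED_OF_SOUND = 343.0  # m/s

def level_dbfs(x):
    """RMS level relative to digital full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

def doa_degrees(left, right):
    """Angle of arrival from the inter-microphone delay (far-field model)."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # positive: left delayed vs right
    tau = lag / FS                            # time difference of arrival, s
    s = np.clip(tau * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(s)))

# Synthetic check: a click that reaches the left mic 10 samples later.
sig = np.zeros(1024)
sig[100] = 1.0
left = np.roll(sig, 10)
print(round(level_dbfs(left), 1))       # -30.1 (one unit sample in 1024)
print(round(doa_degrees(left, sig), 1)) # 20.9
```

Both routines are cheap enough, one log and one cross-correlation per frame, to run comfortably on the class of embedded processors discussed above.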
Audio event detection, sound level computation and direction-of-arrival metadata are sent to cloud services, using serverless applications to aggregate records and a web server to host graphical post-analytics. The sheer amount of big data collected from deployed sound sensors has to be stored and easily accessed for further analytical research and consumption. This is where a cloud platform is chosen as the back end of the entire smart-sensing ecosystem: the front end is represented by groups of intelligent edge sensors, and the back end by cloud services that manage data storage, data transfer, data filtering and data retrieval. Sensors can be various transducers converting physical attributes such as sound, temperature and humidity into digital representations ready for signal processing. The rate of growth of available cloud services is tremendous, so continuous study of the most efficient system and cloud architecture for differing urban applications is no trivial task.
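The aggregation step a serverless function might perform before hand-off to the web server can be sketched as below. The record shape and the generic `handler(event, context)` signature are assumptions modelled on common cloud-function conventions, not a specific provider's API.

```python
import json
from collections import defaultdict

def handler(event, context=None):
    """Aggregate a batch of sound-event records into per-sensor summaries."""
    agg = defaultdict(lambda: {"count": 0, "spl_sum": 0.0, "classes": set()})
    for r in event["records"]:
        a = agg[r["sensor_id"]]
        a["count"] += 1
        a["spl_sum"] += r["spl_db"]
        a["classes"].add(r["noise_class"])
    return {
        sid: {"count": a["count"],
              "mean_spl_db": round(a["spl_sum"] / a["count"], 1),
              "classes": sorted(a["classes"])}
        for sid, a in agg.items()
    }

batch = {"records": [
    {"sensor_id": "SN-001", "spl_db": 71.0, "noise_class": "traffic"},
    {"sensor_id": "SN-001", "spl_db": 75.0, "noise_class": "car_horn"},
    {"sensor_id": "SN-002", "spl_db": 58.5, "noise_class": "voice"},
]}
print(json.dumps(handler(batch), indent=2))
```

Reducing raw event streams to compact per-sensor summaries inside the serverless layer keeps storage and retrieval costs down, leaving the full-resolution records for the deeper big-data analytics described earlier.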