Funded Research Projects
Co-Principal Investigator, Towards Highly Interactive Distributed Media Environment, NRF, Dec 2007 – Nov 2010, S$954,000.00
In this project, we aim to investigate fundamental techniques for creating highly interactive, distributed media environments. To support meaningful interactions among users, such an environment must satisfy two basic QoS (quality of service) requirements: interactivity and consistency. The central theme of this project is therefore how to maintain the interactivity and consistency of a large-scale, highly interactive media environment under the constraints of large network latencies and heavy resource demands. Three fundamental problems will be investigated: the server provisioning and placement problem, the zone mapping problem, and the update scheduling problem. We plan to design computationally efficient approximation algorithms to solve these problems.
Principal Investigator, COSMOS: CrOwd Simulation for Military OperationS, DSTA, Nov 2006 – Oct 2009, S$1,092,385.
Crowd control is becoming increasingly important in urbanized military operations such as peacekeeping, riot control, disaster management, emergency evacuation, and rescue operations. Given the military challenges and risks posed by crowds, there is an urgent need for a system that lets military personnel prepare for various situations, formulate strategies and answer "what-if" questions, and evaluate hundreds of contingency plans so as to prioritize resources and time during an operation. One way to do so is to create a synthetic virtual environment and use Modelling & Simulation (M&S) techniques to emulate urbanized military operations; crowd modelling and simulation is an essential component of such an environment. This project aims to develop a generic system for CrOwd Simulation for Military OperationS (COSMOS) that integrates game AI, distributed simulation technologies, and computer graphics and animation. The project is divided into five Research and Development Tasks (RDTs): RDT1 Behaviour Representation and Cognitive Models, RDT2 Ontology and Knowledge Repositories, RDT3 Agent-based Simulation Architecture, RDT4 Symbiotic Simulation, and RDT5 Animation and Visualization.
Co-Principal Investigator, An Integrated and Adaptive Decision Support Framework for High-Tech Manufacturing and Service Networks, A*STAR IMSS Project, April 2006 – April 2009, S$615,545.
The objective of our research programme is to investigate how design, analysis, enhancement and implementation of critical business processes in a manufacturing and service network could be realized using one single simulation/application framework. In the pilot phase several critical research issues were addressed with regard to the required interoperation mechanisms, grid computing infrastructure, symbiotic interaction between decision support components and physical systems, and synergy between simulation-based and optimisation-based techniques. Future work in the full-fledged project will look at issues such as synchronisation between applications incorporating different time-advance mechanisms, new methods to reduce the search space of a simulation optimisation task, automatic adaptation and validation of simulation models, methods for collaborative sharing of distributed data, data/functional decomposition, distributed scheduling and task management in a grid environment, as well as business metrics and process enhancement methodologies associated with the realisation of this framework.
Collaborator, Large Scale Distributed Simulation on the Grid, International e-Science "Sister" Project (
The proposed sister project aspires to bring together a combination of strong expertise in distributed simulation, grid computing and agent-based systems, and to advance the field of grid-aware, large-scale distributed simulation. MeSC, the
Collaborator, Development of e-Engineering Infrastructure Using Grid Technologies for Efficient Management and Secured Access of HPC Resources, IHPC Collaborative Project, 2004-2006, Manpower Fund.
A Grid is a hardware and software infrastructure that provides flexible, secure, coordinated resource sharing among dynamic collections of individuals and institutions, referred to as virtual organizations; it has been called tomorrow's supercomputer. The development of Grid technologies is driven mainly by large-scale science and engineering applications that require huge amounts of resources. This collaborative project aims to develop an e-Engineering infrastructure with a pool of Grid services for efficient management of, and easy and secure access to, HPC resources. The research and development will focus mainly on Grid services including an Information Service and a Meta-scheduling & Resource Allocation Service. Existing Grid portals and Grid services, such as Grid security services, will be adopted.
Co-Principal Investigator, A Framework for Large-scale Grid Enabled Distributed Simulation,
Modeling and simulation permeates all areas of science and engineering, and increasingly complex simulation systems often require huge computing resources and data sets that are geographically distributed. The advent of grid technologies enables the use of distributed computing resources and facilitates access to geographically distributed data. However, the High Level Architecture (HLA) for modeling and simulation and grid computing have been developed separately, and the first attempt to combine HLA and Grid technologies was based on non-standard grid protocols. The main aim of this project is to develop a framework to manage large-scale, Grid-enabled, HLA-based simulations using the proposed Open Grid Service Architecture standard. A major deficiency of HLA is that it provides no mechanisms for managing the resources on which a simulation is executed. To support the execution of large HLA-based distributed simulations over the Grid, we propose to address this deficiency by developing the following additional services: dynamic naming services, load management services, and fault-tolerance services.
Principal Investigator, Consistency in Large Scale Distributed Virtual Training Environments, DSTA, 2002-2004, S$232K.
Quality of Training (QoT) is very important in large-scale distributed virtual training environments (e.g., virtual battlefield simulations). Poor QoT may result in loss of the money invested in the training system, loss of the time of the military personnel who underwent the training and, worst of all, wrong decision-making. Consistency, a fundamental problem for all distributed systems, is especially important for large-scale distributed virtual training environments. This research supports the development of distributed virtual training environments that offer improved QoT by minimizing inconsistencies in the simulation applications. In a large-scale distributed virtual training environment, message transmission delays over the network and clock asynchrony among simulation nodes mean that a consistent view among participants is not guaranteed, and various inconsistencies may occur. In this project, two types of consistency problems are studied: causality violation, and time-space inconsistency, a consistency problem unique to large-scale distributed virtual environments. To eliminate causality violations, a causality-based Time Management (TM) mechanism will be developed. To tackle the time-space inconsistency problem, a metric, a model and the protocol associated with the model will be investigated. In addition, a middleware solution implementing the TM mechanism and the protocol will be developed.
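As a minimal illustration of what a causality violation looks like (the project's own TM mechanism is not specified here; this sketch uses standard vector clocks, one well-known way to detect out-of-order delivery):

```python
# Illustrative sketch only, assuming standard vector-clock causal delivery;
# the project's causality-based TM mechanism may differ.

class Node:
    def __init__(self, node_id, num_nodes):
        self.id = node_id
        self.clock = [0] * num_nodes  # vector clock, one entry per node

    def send(self):
        self.clock[self.id] += 1
        return list(self.clock)  # timestamp piggybacked on the message

    def deliverable(self, sender_id, ts):
        # Causal delivery condition: this is the next message expected from
        # the sender, and every event it causally depends on was already seen.
        return (ts[sender_id] == self.clock[sender_id] + 1 and
                all(ts[k] <= self.clock[k]
                    for k in range(len(ts)) if k != sender_id))

    def receive(self, sender_id, ts):
        if not self.deliverable(sender_id, ts):
            return False  # causality violation: buffer or delay the message
        self.clock = [max(a, b) for a, b in zip(self.clock, ts)]
        return True

# Node 1 sees node 0's second update before its first: delivery is refused.
n0, n1 = Node(0, 2), Node(1, 2)
m1 = n0.send()  # timestamp [1, 0]
m2 = n0.send()  # timestamp [2, 0]
assert n1.receive(0, m2) is False  # m2 arrives first -> violation detected
assert n1.receive(0, m1) is True   # causes delivered in order
assert n1.receive(0, m2) is True
```

The out-of-order message is held back rather than applied, so every participant sees updates in an order consistent with cause and effect.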
Collaborator, Technology for Secure and Robust Distributed Supply Chain Simulation, SIMTech Collaborative Research Project, 2001-2003, Manpower Fund.
To realize High-Fidelity Supply Chain Optimization through adoption of the HLA standard, several technical issues are addressed in this project: a) Sensitive data encapsulation and protection - the HLA standard provides no security mechanism to validate the identity of participants joining a simulation or to protect the data exchanged among participants. b) Reliability - the HLA standard does not address fault tolerance; simulations running at multiple locations are liable to failures (for example, network failure). c) Synchronization - the HLA standard has addressed the issue of synchronizing the time advancement of each simulation module (federate). d) Simulation cloning - simulation is a technique for performing "what-if" analysis, and the ability to spawn off a simulation from a decision point to evaluate alternative scenarios is crucial to improving the productivity of simulation-based analysis. e) Web-enabled visualization and interactive control of a simulation - some means of supporting interactive control of a simulation via a visualization frontend is important to improve the productivity and effectiveness of using simulation for "what-if" analysis.
Principal Investigator, Distributed Simulation: Scalability, Interoperability & Applications, NSTB Broadband21 Fund, 2001-2003, S$261K.
In this project, we propose a hierarchical RTI implementation architecture to tackle the scalability and interoperability issues of the RTI. Our approach differs from previous work in that it is based on a hybrid architecture for interoperability between federations. The HLA is widely applicable across a full range of simulation application areas, including education and training, analysis and entertainment. To demonstrate the capability of the hierarchical RTI implementation architecture, we will study applications that are closely related to
Principal Investigator, Algorithms and Methods for Distributed Interactive Simulation, Academic Research Fund, 1998-2001, S$82K.
In DIS, simulation entities are physically distributed. In a large-scale simulation, interacting with other simulation entities and updating simulation states may generate a large volume of communication, so one important issue in DIS is to reduce the amount of communication. Communication latency and asynchronous execution of the simulation on different machines may cause data and events to arrive out of the order in which they were generated, so another important issue is to maintain temporal correlation in the simulation. In this project, we will therefore study time and data management in DIS, aiming to meet the real-time constraints of executing large-scale interactive simulations on proportionally large-scale distributed systems.
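One standard way DIS systems reduce update traffic (a common technique in the field, not necessarily the method developed in this project) is dead reckoning: receivers extrapolate remote entity state, and the owning node broadcasts a fresh update only when the extrapolation error exceeds a threshold. A minimal one-dimensional sketch:

```python
# Hedged sketch of first-order (constant-velocity) dead reckoning;
# thresholds, the motion model and the update policy are illustrative choices.

def extrapolate(pos, vel, dt):
    # First-order dead-reckoning model: constant velocity.
    return pos + vel * dt

def simulate(true_positions, v0, threshold):
    """Return the ticks at which an update message must be sent."""
    sent = []
    base_pos, base_vel, base_t = true_positions[0], v0, 0
    for t, actual in enumerate(true_positions):
        predicted = extrapolate(base_pos, base_vel, t - base_t)
        if abs(actual - predicted) > threshold:
            sent.append(t)  # error too large: broadcast fresh state
            base_pos, base_t = actual, t
            # (a fuller model would also refresh the velocity estimate)
    return sent

# Entity moving at speed 1.0 until tick 5, then accelerating.
path = [0, 1, 2, 3, 4, 5, 7, 9, 12, 15]
updates = simulate(path, v0=1.0, threshold=1.5)
assert len(updates) < len(path)  # far fewer messages than one per tick
```

While the entity follows the predicted trajectory, no messages are sent at all; communication is spent only where the prediction breaks down.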
Collaborator, Parallel & Distributed Simulation of Virtual Factory Implementation, NSTB RIC-University Research Fund, 1997-2001, S$1492K.
This project has the objective of improving the execution performance of simulation models of large-scale manufacturing systems through the use of parallel and distributed processing. Faster execution will support improved decision making in time-constrained applications. The project aims to develop (i) a capability to execute virtual factory simulations in a reasonable amount of time; (ii) a capability for bottom-up validation of manufacturing system integration; (iii) a generic environment for Parallel And Distributed Simulation (PADS) for virtual factory implementation; and (iv) PADS technology for large-scale simulation applications.
Co-Principal Investigator, Multiple Knowledge Sourcing Approach to Translation, NSRC RIC-University Research Fund, 1995-1998, S$91K.
A data-exploration approach, called the Corpus-based Statistical (CBS) approach, can be used to acquire some knowledge automatically and thus lighten the load on the human expert in natural language processing. However, like most data-exploration approaches, the CBS approach suffers from the drawback of requiring extremely large computational resources, both in computation time and in memory; many of the statistical optimisation processes can take months to run. Parallelisation of such processes therefore becomes a necessary step to make knowledge acquisition fast and hence feasible. This project deals with how to implement parallel (fast) statistical optimisation algorithms to acquire some of the necessary knowledge automatically. The work will be carried out on a cluster of workstations using PVM, on an IBM SP2 parallel computer, and on a CRAY T94 supercomputer.
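The parallelisation pattern involved is essentially data-parallel count collection: partition the corpus across workers, compute partial statistics on each shard, and merge. A small sketch of the idea, using Python's standard-library multiprocessing rather than PVM (the actual project's algorithms and platforms differ):

```python
# Illustrative sketch only: parallel bigram counting as a stand-in for the
# count-collection step that dominates many corpus-based statistical methods.

from collections import Counter
from multiprocessing import Pool

def partial_counts(sentences):
    # Count adjacent word pairs in one shard of the corpus.
    c = Counter()
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            c[(a, b)] += 1
    return c

def parallel_bigram_counts(corpus, workers=2):
    # Partition the corpus into shards, one per worker.
    shards = [corpus[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        parts = pool.map(partial_counts, shards)
    total = Counter()
    for p in parts:
        total.update(p)  # merge: counts are associative, so order is irrelevant
    return total

if __name__ == "__main__":
    corpus = ["the cat sat", "the cat ran", "a cat sat"]
    counts = parallel_bigram_counts(corpus, workers=2)
    assert counts[("the", "cat")] == 2
    assert counts[("cat", "sat")] == 2
```

Because the counts are associative and commutative, the merge step is trivially correct regardless of how the corpus is sharded, which is what makes this class of statistics a natural fit for message-passing parallelism.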
Collaborator, A Fault Tolerant, Variable Architecture Parallel Computing Platform, Academic Research Fund, 1994-1999, S$230K.
This project proposes to implement a novel parallel computing platform that varies the topology of the multiprocessor network during task execution according to communication patterns and the occurrence of both processor and link faults. Designing fault tolerance into such an architecture enhances the availability of the platform for computationally intensive, safety-critical and mission-critical applications, enabling these applications to achieve a higher probability of completion.