Research Focus

Fundamental Research in Generative Models

🔬 Efficient Model Architectures
Exploring new transformer variants, diffusion models, and beyond to improve efficiency and performance.

🔬 Scalable Multimodal Foundation Models
Developing large-scale models that integrate vision, language, speech, and other modalities.

🔬 Self-Supervised and Few-Shot Learning
Enabling generative models to learn with minimal supervision and generalize well from few examples across diverse tasks.

🔬 Interpretability and Explainability
Understanding how generative models make decisions through theoretical discoveries and empirical studies.

Trustworthy Generative AI

🛡️ Stability
Ensuring generative models produce reliable and consistent outputs, especially in high-risk applications such as healthcare and finance.

🛡️ Fairness and Bias Mitigation
Identifying and reducing biases in generative AI models to enhance fairness.

🛡️ Robustness and Safety
Preventing adversarial attacks, hallucinations, and unsafe outputs.

🛡️ Ethical AI and Regulatory Compliance
Designing frameworks to align generative AI with ethical principles and legal regulations.

🛡️ Controllable Generation
Implementing control mechanisms to ensure that generated content aligns with user intentions.

Generative AI in Creativity and Collaboration

🎨 AI for Art, Music, and Literature
Developing generative models to assist or co-create with human artists, musicians, and writers.

🎨 Interactive and Co-Creation Systems
Building AI companions that collaborate with humans in real-time creative processes.

🎨 Synthetic Media and Digital Avatars
Enhancing virtual humans, metaverse applications, and AI-generated films and games.

Physics-Informed Generative AI

⚛️ AI-Augmented Scientific Simulations
Developing generative models that incorporate physical laws to enhance simulations in fluid dynamics, quantum mechanics, climate science, and material science.

⚛️ AI for Learning Large-Scale Quantum Systems
Leveraging AI to efficiently certify, benchmark, and characterize large-scale quantum systems beyond the reach of classical computation.

⚛️ Hybrid AI-Physics Models
Integrating deep learning with traditional numerical solvers to improve accuracy, efficiency, and generalizability in physics-based modelling.

⚛️ Symbolic and Data-Driven Discovery
Using generative AI to uncover new physical laws, equations, and scientific hypotheses from experimental data.

⚛️ Inverse Design and Optimization
Leveraging generative models for automated discovery of optimized structures, materials, and engineering solutions.

⚛️ Uncertainty Quantification in Scientific AI
Enhancing the reliability of AI-driven predictions by quantifying uncertainties in physics-based simulations.

Generative AI for Knowledge Discovery

📚 AI for Scientific Research Assistance
Developing generative models to aid hypothesis generation, literature summarization, and automated experiment design, accelerating scientific breakthroughs by analysing vast datasets and generating novel insights.

📚 Automated Knowledge Synthesis
Leveraging generative AI to extract meaningful patterns from large-scale, unstructured data; summarizing research articles, constructing knowledge graphs, and identifying emerging trends in scientific and technological advancement.

📚 Multilingual and Cross-Cultural AI
Designing generative models that contextualize complex scientific, technical, and policy-related knowledge, facilitating international collaboration by breaking language barriers and improving the dissemination of cutting-edge research.

Scalable and Sustainable AI Systems

🌱 Efficient Training and Inference
Reducing the computational and environmental footprint of large generative models.

🌱 Decentralized and Federated Generative AI
Enabling privacy-preserving and distributed training for generative models.

🌱 AI for Climate Science and Sustainability
Using generative AI to optimize renewable energy systems and environmental monitoring.
