• 2024
    May 2
    Two papers from MIG were accepted at ICML 2024.
    Two papers from MIG: Planning, Fast and Slow: Online Reinforcement Learning with Action-Free Offline Data via Multiscale Planners and Bayesian Design Principles for Offline-to-Online Reinforcement Learning, have been accepted by ICML 2024.
  • 2024
    May 2
    A paper from MIG was accepted at IJCAI 2024.
    The paper from MIG, STAR: Spatio-Temporal State Compression for Multi-Agent Tasks with Rich Observations, has been accepted by IJCAI 2024.
  • 2024
    Jan. 20
    Five papers from MIG were accepted at ICLR 2024.
    Five papers from MIG: Towards Robust Offline Reinforcement Learning under Diverse Data Corruption, Efficient Multi-agent Reinforcement Learning by Planning, Leveraging Hyperbolic Embeddings for Coarse-to-Fine Robot Design, Stylized Offline Reinforcement Learning: Extracting Diverse High-Quality Behaviors from Heterogeneous Datasets, and Imitation Learning from Observation with Automatic Discount Scheduling, have been accepted by ICLR 2024.
  • 2024
    Jan. 20
    A paper from MIG was accepted at AAMAS 2024.
    The paper from MIG, IOB: Integrating Optimization Transfer and Behavior Transfer for Multi-Policy Reuse, has been accepted by AAMAS 2024.
  • 2023
    Sept. 22
    Two papers from MIG were accepted at NeurIPS 2023.
    Two papers from MIG: Conservative Offline Policy Adaptation in Multi-Agent Games and Unsupervised Behavior Extraction via Random Intent Priors, have been accepted by NeurIPS 2023.
  • 2023
    Sept. 22
    A paper from MIG was accepted by TMLR.
    The paper from MIG, A Survey on Transformers in Reinforcement Learning, has been accepted by TMLR.
  • 2023
    Sept. 22
    A paper from MIG was accepted at IROS 2023.
    The paper from MIG, Learning to Solve Tasks with Exploring Prior Behaviours, has been accepted by IROS 2023.
  • 2023
    Apr. 25
    Three papers from MIG were accepted at ICML 2023.
    Three papers from MIG: Offline Meta Reinforcement Learning with In-Distribution Online Adaptation, Symmetry-Aware Robot Design with Structured Subgroups, and What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL, have been accepted by ICML 2023.
  • 2023
    Jan. 22
    A paper from MIG was accepted at ICLR 2023.
    The paper from MIG, The Provable Benefit of Unsupervised Data Sharing for Offline Reinforcement Learning, has been accepted by ICLR 2023.
  • 2022
    Nov. 26
    A paper from MIG was accepted at AAAI 2023.
    The paper from MIG, Flow to Control: Offline Reinforcement Learning with Lossless Primitive Discovery, has been accepted by AAAI 2023.
  • 2022
    Sept. 15
    Six papers from MIG were accepted at NeurIPS 2022.
    Six papers from MIG: Non-Linear Coordination Graphs, Safe Opponent-Exploitation Subgame Refinement, RORL: Robust Offline Reinforcement Learning via Conservative Smoothing, Low-Rank Modular Reinforcement Learning via Muscle Synergy, CUP: Critic-Guided Policy Reuse, and Latent-Variable Advantage-Weighted Policy Optimization for Offline Reinforcement Learning, have been accepted by NeurIPS 2022.
  • 2022
    May 15
    Three papers from MIG were accepted at ICML 2022.
    Three papers from MIG: On the Role of Discount Factor in Offline Reinforcement Learning, Self-Organized Polynomial-Time Coordination Graphs, and Individual Reward Assisted Multi-Agent Reinforcement Learning, have been accepted by the International Conference on Machine Learning (ICML) 2022.
  • 2022
    Jan. 21
    Four papers from MIG were accepted at ICLR 2022.
    Four papers from MIG: Context-Aware Sparse Deep Coordination Graphs, Active Hierarchical Exploration with Stable Subgoal Representation Learning, Rethinking Goal-Conditioned Supervised Learning and Its Connection to Offline RL, and Offline Reinforcement Learning with Value-based Episodic Memory, have been accepted by the International Conference on Learning Representations (ICLR) 2022.
  • 2021
    Dec. 2
    A paper from MIG was accepted at AAAI 2022.
    The paper from MIG, Multi-Agent Incentive Communication via Decentralized Teammate Modeling, has been accepted for presentation at the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22). This year, AAAI received a record 9,251 submissions, of which 9,020 were reviewed. Based on a thorough and rigorous review process, AAAI accepted 1,349 papers, yielding an overall acceptance rate of 15%.
  • 2021
    Oct. 14
    Jianhao Wang and Lulu Zheng were awarded the National Scholarship (top 1%) at Tsinghua University.
    Jianhao Wang, a PhD candidate in MIG, was awarded the National Scholarship (top 1%) at Tsinghua University for his impressive scientific research achievements. Lulu Zheng, a master's candidate in MIG, was also awarded the National Scholarship. The National Scholarship is the most prestigious award at Tsinghua University, and this year only 5 graduate students in the Institute for Interdisciplinary Information Sciences (IIIS) received it. To date, 5 students from MIG have won the National Scholarship.
  • 2021
    Sept. 29
    Six papers from MIG were accepted at NeurIPS 2021.
    Six papers from MIG: Celebrating Diversity in Shared Multi-Agent Reinforcement Learning, Offline Reinforcement Learning with Reverse Model-based Imagination, On the Estimation Bias in Double Q-Learning, Episodic Multi-agent Reinforcement Learning with Curiosity-driven Exploration, Towards Understanding Cooperative Multi-Agent Q-Learning with Value Factorization, and Model-Based Reinforcement Learning via Imagination with Derived Memory, were accepted at NeurIPS 2021. The topics of these papers are diverse, spanning multi-agent RL, offline RL, and model-based RL.
  • 2021
    June 28
    Three students graduated from MIG in June 2021, and Guangxiang Zhu won the award for Excellent Doctoral Dissertation of Tsinghua University, 2021
    Guangxiang Zhu, Terry Liu, and Tonghan Wang graduated from MIG in June 2021. Guangxiang, the first student to obtain a PhD from MIG, won the award for Excellent Doctoral Dissertation of Tsinghua University, 2021, a milestone for the group. Terry and Tonghan are the first two students to obtain a Master of Computer Science from MIG.
  • 2021
    May 8
    Two papers from MIG were accepted at ICML 2021.
    Two papers from MIG, MetaCURE: Meta Reinforcement Learning with Empowerment-Driven Exploration and Generalizable Episodic Memory for Deep Reinforcement Learning, were accepted at ICML 2021. There were 5,513 submissions to ICML this year, of which the program committee accepted 1,184 for presentation at the conference, including 166 long presentations and 1,018 short presentations.
  • 2021
    Apr. 30
    A paper from MIG was accepted at IJCAI 2021.
    The paper from MIG, in collaboration with NetEase, Reward-Constrained Behavior Cloning, was accepted for presentation at IJCAI-21 (the 30th International Joint Conference on Artificial Intelligence). Out of the 4,204 full-paper submissions, 587 papers were accepted, a 13.9% acceptance rate (19.3% of the 3,033 papers that passed the summary-reject review phase and received full reviews).
  • 2021
    Jan. 14
    Four papers from MIG were accepted at ICLR 2021.
    Four papers from MIG were accepted by the International Conference on Learning Representations (ICLR) 2021. Tonghan Wang, Jianhao Wang, and Beining Han focused on Multi-Agent Reinforcement Learning (MARL); their papers RODE: Learning Roles to Decompose Multi-Agent Tasks and Off-Policy Multi-Agent Decomposed Policy Gradients were accepted. Another accepted paper, QPLEX: Duplex Dueling Multi-Agent Q-Learning, encodes the IGM principle into the neural network architecture and thus enables efficient value function learning. Siyuan Li and Lulu Zheng focused on hierarchical reinforcement learning and used slow features to define subgoals in their paper Learning Subgoal Representations with Slow Dynamics, which was also accepted at ICLR 2021.
  • 2020
    Oct. 22
    Tonghan Wang was awarded the National Scholarship (top 1%) at Tsinghua University, 2020.
    Tonghan Wang, a master's candidate in MIG, was awarded the National Scholarship (top 1%) at Tsinghua University for his impressive scientific research achievements. The National Scholarship is the most prestigious award at Tsinghua University, and only 4 graduate students in the Institute for Interdisciplinary Information Sciences (IIIS) receive it each year. Tonghan is the third student from MIG to win the National Scholarship.
  • 2020
    Sept. 15
    Paper: "Bridging Imagination and Reality for Model-Based Deep Reinforcement Learning" from MIG was accepted at NeurIPS 2020
    Sample efficiency has been one of the major challenges for deep reinforcement learning. Recently, model-based reinforcement learning has been proposed to address this challenge by performing planning on imaginary trajectories with a learned world model. However, world model learning may suffer from overfitting to training trajectories, leaving model-based value estimation and policy search prone to getting stuck in an inferior local policy. In this paper, we propose a novel model-based reinforcement learning algorithm, called BrIdging Reality and Dream (BIRD), which maximizes the mutual information between imaginary and real trajectories so that policy improvements learned from imaginary trajectories generalize easily to real trajectories. We demonstrate that our approach improves the sample efficiency of model-based planning and achieves state-of-the-art performance on challenging visual control benchmarks.
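    As a rough illustration of the objective, the minimal Python sketch below maximizes an InfoNCE-style lower bound on the mutual information between embeddings of imagined and real trajectory segments; the encoder, dimensions, and batching here are illustrative assumptions, not BIRD's actual implementation.

        # Hypothetical sketch: InfoNCE lower bound on the mutual information
        # between real and imagined trajectory segments. Minimizing this loss
        # maximizes the bound. The encoder and all shapes are assumptions.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class TrajEncoder(nn.Module):
            """Embed a flattened trajectory segment on the unit sphere."""
            def __init__(self, traj_dim: int, emb_dim: int = 128):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(traj_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))

            def forward(self, traj: torch.Tensor) -> torch.Tensor:
                return F.normalize(self.net(traj), dim=-1)

        def info_nce_loss(real_emb, imag_emb, temperature: float = 0.1):
            # Matched (real, imagined) pairs are positives; all other pairings
            # in the batch serve as negatives.
            logits = real_emb @ imag_emb.t() / temperature          # (B, B)
            targets = torch.arange(logits.size(0), device=logits.device)
            return F.cross_entropy(logits, targets)

        encoder = TrajEncoder(traj_dim=64)
        real = encoder(torch.randn(32, 64))   # placeholder real segments
        imag = encoder(torch.randn(32, 64))   # placeholder imagined rollouts
        info_nce_loss(real, imag).backward()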
  • 2020
    June 15
    Paper: "ROMA: Multi-Agent Reinforcement Learning with Emergent Roles" from MIG was accepted at ICML 2020
    ROMA proposes a role-oriented learning framework that learns sub-task specialization in cooperative agent teams. In this framework, roles are emergent, and agents with similar roles tend to share their learning and specialize in certain sub-tasks. ROMA learns specialized, dynamic, versatile, and identifiable roles, and pushes forward the state of the art on the StarCraft II micromanagement benchmark.
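    To make the role-oriented idea concrete, here is a hypothetical minimal Python sketch of an agent whose policy is conditioned on a latent role sampled from a learned distribution; the dimensions and the Gaussian role posterior are illustrative assumptions rather than ROMA's exact architecture.

        # Hypothetical sketch: a role encoder maps the agent's local observation
        # to a latent role, which conditions the agent's Q-network. Agents with
        # similar role embeddings can then be regularized to behave similarly.
        import torch
        import torch.nn as nn

        class RoleConditionedAgent(nn.Module):
            def __init__(self, obs_dim: int, n_actions: int, role_dim: int = 8):
                super().__init__()
                self.role_encoder = nn.Linear(obs_dim, 2 * role_dim)  # mean, log-std
                self.q_net = nn.Sequential(
                    nn.Linear(obs_dim + role_dim, 64), nn.ReLU(),
                    nn.Linear(64, n_actions))

            def forward(self, obs: torch.Tensor):
                mu, log_std = self.role_encoder(obs).chunk(2, dim=-1)
                role = mu + log_std.exp() * torch.randn_like(mu)  # reparameterized
                q_values = self.q_net(torch.cat([obs, role], dim=-1))
                return q_values, role

        agent = RoleConditionedAgent(obs_dim=32, n_actions=10)
        q, role = agent(torch.randn(4, 32))  # batch of 4 local observations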
  • 2020
    Jan. 9
    Three papers from MIG were accepted at ICLR 2020
    Three papers from MIG were accepted by the International Conference on Learning Representations (ICLR) 2020. Tonghan Wang and Jianhao Wang focused on exploration and communication problems in Multi-Agent Reinforcement Learning (MARL). Both of their papers, Influence-Based Multi-Agent Exploration and Learning Nearly Decomposable Value-Functions via Communication Minimization, were accepted; Influence-Based Multi-Agent Exploration was accepted as a spotlight. Guangxiang Zhu focused on combining episodic control with reinforcement learning, and his paper Episodic Reinforcement Learning with Associative Memory was also accepted.
  • 2019
    Nov. 11
    Paper: "Object-Oriented Dynamics Learning through Multi-Level Abstraction" from MIG was accepted at the AAAI Conference on Artificial Intelligence (AAAI) 2020
    This paper presents a novel self-supervised object-oriented framework for efficient object-based dynamics learning and planning, employing a three-level learning architecture that proceeds from motion detection to dynamic instance segmentation to dynamics learning. The framework can learn a model from few interactions with the environment and enables an agent to plan directly in unseen environments without retraining. In addition, it learns semantically and visually interpretable representations.
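    As a rough sketch of how such a three-level pipeline might be wired together, the hypothetical Python snippet below chains motion detection, instance segmentation, and per-object dynamics prediction; every module body, dimension, and interface here is an illustrative assumption, not the paper's actual network.

        # Hypothetical sketch of the three-level architecture: motion detection
        # -> dynamic instance segmentation -> per-object dynamics prediction.
        import torch
        import torch.nn as nn

        class OODynamics(nn.Module):
            def __init__(self, num_objects: int = 4, state_dim: int = 8):
                super().__init__()
                # Level 1: coarse motion mask from two stacked RGB frames.
                self.motion = nn.Conv2d(6, 1, kernel_size=3, padding=1)
                # Level 2: split moving pixels into per-object masks.
                self.segment = nn.Conv2d(1, num_objects, kernel_size=3, padding=1)
                # Level 3: predict each object's next latent state.
                self.pool = nn.AdaptiveAvgPool2d(1)
                self.dynamics = nn.Linear(1 + state_dim, state_dim)

            def forward(self, frames, object_states):
                motion_mask = torch.sigmoid(self.motion(frames))            # (B,1,H,W)
                object_masks = torch.softmax(self.segment(motion_mask), 1)  # (B,K,H,W)
                feats = self.pool(object_masks).flatten(2)                  # (B,K,1)
                next_states = self.dynamics(
                    torch.cat([feats, object_states], dim=-1))              # (B,K,D)
                return motion_mask, object_masks, next_states

        model = OODynamics()
        frames, states = torch.randn(2, 6, 32, 32), torch.randn(2, 4, 8)
        _, masks, next_states = model(frames, states)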
  • 2019
    Oct. 15
    Guangxiang Zhu and Siyuan Li were both awarded the National Scholarship (top 1%) at Tsinghua University
    Guangxiang Zhu and Siyuan Li, two PhD candidates in MIG, were both awarded the National Scholarship (top 1%) at Tsinghua University for their impressive achievements in scientific research. The National Scholarship is the most prestigious award at Tsinghua University, and only 4 graduate students in the Institute for Interdisciplinary Information Sciences (IIIS) receive it each year.
  • 2019
    Sept. 4
    Paper: "Hierarchical Reinforcement Learning with Advantage-Based Auxiliary Rewards" was accepted at the Conference on Neural Information Processing Systems (NeurIPS) 2019
    This paper aims to adapt low-level skills to downstream tasks while maintaining the generality of reward design. Siyuan Li and Rui Wang proposed an HRL framework that sets auxiliary rewards for low-level skill training based on the advantage function of the high-level policy. This auxiliary reward enables efficient, simultaneous learning of the high-level policy and low-level skills without using task-specific knowledge. In addition, they theoretically prove that optimizing low-level skills with this auxiliary reward increases the task return of the joint policy.
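    As a rough illustration of the idea, the hypothetical Python snippet below derives an auxiliary reward for low-level transitions from a one-step estimate of the high-level advantage; the value-network interface, the advantage estimator, and the mixing coefficient are illustrative assumptions, not the paper's exact formulation.

        # Hypothetical sketch: reward low-level skills in proportion to how much
        # they advance the high-level policy, via a one-step advantage estimate
        # A(s) ~ r + gamma * V(s') - V(s).
        import torch
        import torch.nn as nn

        def high_level_advantage(value_fn, state, next_state, env_reward,
                                 gamma: float = 0.99):
            with torch.no_grad():
                return env_reward + gamma * value_fn(next_state) - value_fn(state)

        def low_level_reward(advantage, intrinsic, coef: float = 0.1):
            # Skill's own intrinsic reward plus an advantage-based bonus.
            return intrinsic + coef * advantage

        # Toy usage with a small value network over 16-dimensional states.
        value_fn = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
        s, s_next = torch.randn(8, 16), torch.randn(8, 16)
        adv = high_level_advantage(value_fn, s, s_next, env_reward=torch.zeros(8, 1))
        r_aux = low_level_reward(adv, intrinsic=torch.zeros(8, 1))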