Acquiring Diverse Skills using Curriculum Reinforcement Learning with Mixture of Experts. | ICML | 2024 | 0
Beyond ELBOs: A Large-Scale Evaluation of Variational Methods for Sampling. | ICML | 2024 | 0
Lens Capsule Tearing in Cataract Surgery using Reinforcement Learning. | ICRA | 2024 | 0
Open the Black Box: Step-based Policy Updates for Temporally-Correlated Episodic Reinforcement Learning. | ICLR | 2024 | 0
Towards Diverse Behaviors: A Benchmark for Imitation Learning with Human Demonstrations. | ICLR | 2024 | 0
Neural Contractive Dynamical Systems. | ICLR | 2024 | 0
Movement Primitive Diffusion: Learning Gentle Robotic Manipulation of Deformable Objects. | IEEE Robotics and Automation Letters | 2024 | 0
Robust Black-Box Optimization for Stochastic Search and Episodic Reinforcement Learning. | JMLR | 2024 | 0
Curriculum-Based Imitation of Versatile Skills. | ICRA | 2023 | 0
SA6D: Self-Adaptive Few-Shot 6D Pose Estimator for Novel and Occluded Objects. | CoRL | 2023 | 0
Swarm Reinforcement Learning for Adaptive Mesh Refinement. | NIPS/NeurIPS | 2023 | 0
Information Maximizing Curriculum: A Curriculum-Based Approach for Learning Versatile Skills. | NIPS/NeurIPS | 2023 | 0
Beyond Deep Ensembles: A Large-Scale Evaluation of Bayesian Deep Learning under Distribution Shift. | NIPS/NeurIPS | 2023 | 0
Multi Time Scale World Models. | NIPS/NeurIPS | 2023 | 0
Adversarial Imitation Learning with Preferences. | ICLR | 2023 | 0
Grounding Graph Network Simulators using Physical Sensor Observations. | ICLR | 2023 | 0
Accurate Bayesian Meta-Learning by Accurate Task Posterior Inference. | ICLR | 2023 | 0
ProDMP: A Unified Perspective on Dynamic and Probabilistic Movement Primitives. | IEEE Robotics and Automation Letters | 2023 | 0
SyMFM6D: Symmetry-Aware Multi-Directional Fusion for Multi-View 6D Object Pose Estimation. | IEEE Robotics and Automation Letters | 2023 | 0
Reactive motion generation on learned Riemannian manifolds. | IJRR | 2023 | 0
LapGym - An Open Source Framework for Reinforcement Learning in Robot-Assisted Laparoscopic Surgery. | JMLR | 2023 | 0
Hierarchical Policy Learning for Mechanical Search. | ICRA | 2022 | 1
Push-to-See: Learning Non-Prehensile Manipulation to Enhance Instance Segmentation via Deep Q-Learning. | ICRA | 2022 | 1
MV6D: Multi-View 6D Pose Estimation on RGB-D Frames Using a Deep Point-wise Voting Network. | IROS | 2022 | 1
End-to-End Learning of Hybrid Inverse Dynamics Models for Precise and Compliant Impedance Control. | RSS | 2022 | 0
FusionVAE: A Deep Hierarchical Variational Autoencoder for RGB Image Fusion. | ECCV | 2022 | 0
What Matters For Meta-Learning Vision Regression Tasks? | CVPR | 2022 | 5
Robot Policy Learning from Demonstration Using Advantage Weighting and Early Termination. | IROS | 2022 | 0
Hidden Parameter Recurrent State Space Models For Changing Dynamics Scenarios. | ICLR | 2022 | 0
Inferring Versatile Behavior from Demonstrations by Matching Geometric Descriptors. | CoRL | 2022 | 0
Deep Black-Box Reinforcement Learning with Movement Primitives. | CoRL | 2022 | 0
Differentiable Trust Region Layers for Deep Reinforcement Learning. | ICLR | 2021 | 9
Learning Riemannian Manifolds for Geodesic Motion Skills. | RSS | 2021 | 10
Residual Feedback Learning for Contact-Rich Manipulation Tasks with Uncertainty. | IROS | 2021 | 4
Bayesian Context Aggregation for Neural Processes. | ICLR | 2021 | 14
Specializing Versatile Skill Libraries using Local Mixture of Experts. | CoRL | 2021 | 17
Navigate-and-Seek: A Robotics Framework for People Localization in Agricultural Environments. | IEEE Robotics and Automation Letters | 2021 | 2
Cooperative Assistance in Robotic Surgery through Multi-Agent Reinforcement Learning. | IROS | 2021 | 3
Next-Best-Sense: A Multi-Criteria Robotic Exploration Strategy for RFID Tags Discovery. | IEEE Robotics and Automation Letters | 2020 | 3
Probabilistic Approach to Physical Object Disentangling. | IEEE Robotics and Automation Letters | 2020 | 3
Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning. | CoRL | 2020 | 4
Expected Information Maximization: Using the I-Projection for Mixture Density Estimation. | ICLR | 2020 | 9
Enhancing Grasp Pose Computation in Gripper Workspace Spheres. | ICRA | 2020 | 0
Trust-Region Variational Inference with Gaussian Mixture Models. | JMLR | 2020 | 0
Improving Local Trajectory Optimisation using Probabilistic Movement Primitives. | IROS | 2019 | 26
Learning Kalman Network: A deep monocular visual odometry for on-road driving. | Robotics and Autonomous Systems | 2019 | 12
Compatible natural gradient policy search. | MLJ | 2019 | 0
Learning Replanning Policies With Direct Policy Search. | IEEE Robotics and Automation Letters | 2019 | 4
Projections for Approximate Policy Iteration Algorithms. | ICML | 2019 | 8
Grasping Unknown Objects Based on Gripper Workspace Spheres. | IROS | 2019 | 5
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces. | ICML | 2019 | 53
The kernel Kalman rule - Efficient nonparametric inference by recursive least-squares and subspace projections. | MLJ | 2019 | 0
Deep Reinforcement Learning for Swarm Systems. | JMLR | 2019 | 0
Using probabilistic movement primitives in robotics. | Autonomous Robots | 2018 | 138
Energy-Efficient Design and Control of a Vibro-Driven Robot. | IROS | 2018 | 19
Regularizing Reinforcement Learning with State Abstraction. | IROS | 2018 | 11
Contact Detection and Size Estimation Using a Modular Soft Gripper with Embedded Flex Sensors. | IROS | 2018 | 5
Efficient Gradient-Free Variational Inference using Policy Search. | ICML | 2018 | 29
Learning Robust Policies for Object Manipulation with Robot Swarms. | ICRA | 2018 | 15
Sample and Feedback Efficient Hierarchical Reinforcement Learning from Human Preferences. | ICRA | 2018 | 19
Learning Coupled Forward-Inverse Models with Combined Prediction Errors. | ICRA | 2018 | 2
Model-Free Trajectory-based Policy Optimization with Monotonic Improvement. | JMLR | 2018 | 0
Layered direct policy search for learning hierarchical skills. | ICRA | 2017 | 17
A Survey of Preference-Based Reinforcement Learning Methods. | JMLR | 2017 | 0
Probabilistic Prioritization of Movement Primitives. | IEEE Robotics and Automation Letters | 2017 | 21
Hybrid control trajectory optimization under uncertainty. | IROS | 2017 | 13
Non-parametric Policy Search with Limited Information Loss. | JMLR | 2017 | 26
Learning movement primitive libraries through probabilistic segmentation. | IJRR | 2017 | 49
Model-based contextual policy search for data-efficient generalization of robot skills. | Artificial Intelligence | 2017 | 72
A learning-based shared control architecture for interactive task execution. | ICRA | 2017 | 40
Contextual Covariance Matrix Adaptation Evolutionary Strategies. | IJCAI | 2017 | 7
Local Bayesian Optimization of Motor Skills. | ICML | 2017 | 16
Empowered skills. | ICRA | 2017 | 4
The Kernel Kalman Rule - Efficient Nonparametric Inference with Recursive Least Squares. | AAAI | 2017 | 12
Guiding Trajectory Optimization by Demonstrated Distributions. | IEEE Robotics and Automation Letters | 2017 | 39
Learning to Assemble Objects with a Robot Swarm. | AAMAS | 2017 | 4
State-Regularized Policy Search for Linearized Dynamical Systems. | ICAPS | 2017 | 6
Policy Search with High-Dimensional Context Variables. | AAAI | 2017 | 0
Phase estimation for fast action recognition and trajectory generation in human-robot collaboration. | IJRR | 2017 | 0
Probabilistic movement primitives for coordination of multiple human-robot collaborative tasks. | Autonomous Robots | 2017 | 0
Demonstration based trajectory optimization for generalizable robot motions. | Humanoids | 2016 | 38
Probabilistic inference for determining options in reinforcement learning. | MLJ | 2016 | 0
Movement primitives with multiple phase parameters. | ICRA | 2016 | 6
Non-parametric contextual stochastic search. | IROS | 2016 | 3
Using probabilistic movement primitives for striking movements. | Humanoids | 2016 | 19
Learning soft task priorities for control of redundant robots. | ICRA | 2016 | 35
Model-Free Preference-Based Reinforcement Learning. | AAAI | 2016 | 72
Hierarchical Relative Entropy Policy Search. | JMLR | 2016 | 0
Model-Free Trajectory Optimization for Reinforcement Learning. | ICML | 2016 | 44
Optimal control and inverse optimal control by distribution matching. | IROS | 2016 | 4
Catching heuristics are optimal control policies. | NIPS/NeurIPS | 2016 | 29
Learning multiple collaborative tasks with a mixture of Interaction Primitives. | ICRA | 2015 | 99
Learning of Non-Parametric Control Policies with High-Dimensional State Features. | AISTATS | 2015 | 34
Optimizing robot striking movement primitives with Iterative Learning Control. | Humanoids | 2015 | 4
Towards learning hierarchical skills for multi-phase manipulation tasks. | ICRA | 2015 | 109
Learning robot in-hand manipulation with tactile features. | Humanoids | 2015 | 137
A Probabilistic Framework for Semi-autonomous Robots Based on Interaction Primitives with Phase Estimation. | ISRR | 2015 | 15
Regularized covariance estimation for weighted maximum likelihood policy search methods. | Humanoids | 2015 | 17
Extracting low-dimensional control variables for movement primitives. | ICRA | 2015 | 44
Model-free Probabilistic Movement Primitives for physical interaction. | IROS | 2015 | 17
Probabilistic segmentation applied to an assembly task. | Humanoids | 2015 | 38
Learning motor skills from partially observed movements executed at different speeds. | IROS | 2015 | 25
Policy Evaluation with Temporal Differences: A Survey and Comparison (Extended Abstract). | ICAPS | 2015 | 0
Model-Based Relative Entropy Stochastic Search. | NIPS/NeurIPS | 2015 | 0
Interaction primitives for human-robot cooperation tasks. | ICRA | 2014 | 199
Learning interaction for collaborative tasks with probabilistic movement primitives. | Humanoids | 2014 | 88
Latent space policy search for robotics. | IROS | 2014 | 25
Dimensionality reduction for probabilistic movement primitives. | Humanoids | 2014 | 33
Robust policy updates for stochastic optimal control. | Humanoids | 2014 | 8
Policy Search for Path Integral Control. | ECML/PKDD | 2014 | 41
Learning to predict phases of manipulation tasks as hidden states. | ICRA | 2014 | 45
Sample-based information-theoretic stochastic optimal control. | ICRA | 2014 | 23
Policy evaluation with temporal differences: a survey and comparison. | JMLR | 2014 | 0
Towards Robot Skill Learning: From Simple Skills to Table Tennis. | ECML/PKDD | 2013 | 39
Probabilistic Movement Primitives. | NIPS/NeurIPS | 2013 | 444
Learning sequential motor tasks. | ICRA | 2013 | 49
A probabilistic approach to robot trajectory generation. | Humanoids | 2013 | 20
Autonomous reinforcement learning with hierarchical REPS. | IJCNN | 2013 | 5
Data-Efficient Generalization of Robot Skills with Contextual Policy Search. | AAAI | 2013 | 124
Generalization of human grasping for multi-fingered robot hands. | IROS | 2012 | 98
Learning concurrent motor skills in versatile solution spaces. | IROS | 2012 | 45
Hierarchical Relative Entropy Policy Search. | AISTATS | 2012 | 0
Variational Inference for Policy Search in changing situations. | ICML | 2011 | 101
Learning complex motions by sequencing simpler motion templates. | ICML | 2009 | 47
Fitted Q-iteration by Advantage Weighted Regression. | NIPS/NeurIPS | 2008 | 58
Biologically inspired kinematic synergies provide a new paradigm for balance control of humanoid robots. | Humanoids | 2007 | 31
Efficient Continuous-Time Reinforcement Learning with Adaptive State Graphs. | ECML/PKDD | 2007 | 19