[1] MORDATCH I, TODOROV E, POPOVIĆ Z. Discovery of complex behaviors through contact invariant optimization[J]. ACM Transactions on Graphics (TOG), 2012, 31(4): 1-8.
[2] MORDATCH I, WANG J M, TODOROV E, et al. Animating human lower limbs using contact invariant optimization[J]. ACM Transactions on Graphics (TOG), 2013, 32(6): 1-8.
[3] ZHENG Y, YAMANE K. Human motion tracking control with strict contact force constraints for floating-base humanoid robots[P]. Google Patents, 2015.
[4] ABE Y, DA SILVA M, POPOVIĆ J. Multiobjective control with frictional contacts[C]//Proceedings of the 2007 ACM SIGGRAPH/Eurographics symposium on Computer animation. 2007: 249-258.
[5] HOLDEN D, KANOUN O, PEREPICHKA M, et al. Learned motion matching[J]. ACM Transactions on Graphics (TOG), 2020, 39(4): 53-1.
[6] HOLDEN D, KOMURA T, SAITO J. Phase-functioned neural networks for character control[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 1-13.
[7] STARKE S, ZHANG H, KOMURA T, et al. Neural state machine for character-scene interactions[J]. ACM Transactions on Graphics (TOG), 2019, 38(6): 209-1.
[8] STARKE S, ZHAO Y, KOMURA T, et al. Local motion phases for learning multi-contact character movements[J]. ACM Transactions on Graphics (TOG), 2020, 39(4): 54-1.
[9] REMPE D, GUIBAS L J, HERTZMANN A, et al. Contact and human dynamics from monocular video[C]//Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16. Springer, 2020: 71-87.
[10] SHIMADA S, GOLYANIK V, XU W, et al. Physcap: Physically plausible monocular 3d motion capture in real time[J]. ACM Transactions on Graphics (TOG), 2020, 39(6): 1-16.
[11] REMPE D, BIRDAL T, HERTZMANN A, et al. Humor: 3d human motion model for robust pose estimation[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2021: 11488-11499.
[12] MARUYAMA T, TADA M, SAWATOME A, et al. Constraint-based real-time full-body motion capture using inertial measurement units[C]//2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2018: 4298-4303.
[13] YI X, ZHOU Y, XU F. Transpose: Real-time 3d human translation and pose estimation with six inertial sensors[J]. ACM Transactions on Graphics (TOG), 2021, 40(4): 1-13.
[14] YI X, ZHOU Y, HABERMANN M, et al. Physical inertial poser (pip): Physics-aware real-time human motion tracking from sparse inertial sensors[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 13167-13178.
[15] BRUBAKER M A, SIGAL L, FLEET D J. Estimating contact dynamics[C]//2009 IEEE 12th International Conference on Computer Vision. IEEE, 2009: 2389-2396.
[16] HSU M S (徐銘聲), LIN I C (林奕成), et al. Real-time detection and correction of foot skating in character animation[Z]. 2008.
[17] IKEMOTO L, ARIKAN O, FORSYTH D. Knowing when to put your foot down[C]//Proceedings of the 2006 symposium on Interactive 3D graphics and games. 2006: 49-53.
[18] MA H, YAN W, YANG Z, et al. Real-time foot-ground contact detection for inertial motion capture based on an adaptive weighted naive Bayes model[J]. IEEE Access, 2019, 7: 130312-130326.
[19] MILLARD M, MOMBAUR K. A quick turn of foot: Rigid foot-ground contact models for human motion prediction[J]. Frontiers in Neurorobotics, 2019, 13: 62.
[20] BROWN P. Contact modelling for forward dynamics of human motion[D]. University of Waterloo, 2017.
[21] XIE K, WANG T, IQBAL U, et al. Physics-based human motion estimation and synthesis from videos[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 11532-11541.
[22] ZIEGLER J, KRETZSCHMAR H, STACHNISS C, et al. Accurate human motion capture in large areas by combining IMU- and laser-based people tracking[C]//2011 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2011: 86-91.
[23] MADARAS M, RIEČICKÝ A, MESÁROŠ M, et al. Position Estimation and Calibration of Inertial Motion Capture Systems Using Single Camera[J]. Journal of Virtual Reality and Broadcasting, 2019, 15(3).
[24] RIEČICKÝ A, MADARAS M, PIOVARČI M, et al. Optical-inertial Synchronization of MoCap Suit with Single Camera Setup for Reliable Position Tracking[C]//VISIGRAPP (1: GRAPP). 2018: 40-47.
[25] ZHU L, XU C, SHI K, et al. Recovering Walking Trajectories from Local Measurements and Inertia Data[J]. Mathematical Problems in Engineering, 2020, 2020: 1-11.
[26] KAICHI T, MARUYAMA T, TADA M, et al. Resolving position ambiguity of IMU-based human pose with a single RGB camera[J]. Sensors, 2020, 20(19): 5453.
[27] SCHREINER P, PEREPICHKA M, LEWIS H, et al. Global position prediction for interactive motion capture[J]. Proceedings of the ACM on Computer Graphics and Interactive Techniques, 2021, 4(3): 1-16.
[28] MAHMOOD N, GHORBANI N, TROJE N F, et al. AMASS: Archive of Motion Capture as Surface Shapes[J/OL]. CoRR, 2019, abs/1904.03278. http://arxiv.org/abs/1904.03278.
[29] MAEDA T, UKITA N. MotionAug: Augmentation with Physical Correction for Human Motion Prediction[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 6427-6436.
[30] YAO L, YANG W, HUANG W. A data augmentation method for human action recognition using dense joint motion images[J]. Applied Soft Computing, 2020, 97: 106713.
[31] ZHANG J, WU F, WEI B, et al. Data augmentation and dense-LSTM for human activity recognition using WiFi signal[J]. IEEE Internet of Things Journal, 2020, 8(6): 4628-4641.
[32] MAEDA T, UKITA N. Data Augmentation for Human Motion Prediction[C]//2021 17th International Conference on Machine Vision and Applications (MVA). IEEE, 2021: 1-5.
[33] AGRAWAL R, JOSHI A, BETKE M. Enabling early gesture recognition by motion augmentation[C]//Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference. 2018: 98-101.
[34] STEVEN EYOBU O, HAN D S. Feature representation and data augmentation for human activity classification based on wearable IMU sensor data using a deep LSTM neural network[J]. Sensors, 2018, 18(9): 2892.
[35] ROGEZ G, SCHMID C. Mocap-guided data augmentation for 3d pose estimation in the wild[J]. Advances in Neural Information Processing Systems, 2016, 29.
[36] YIN K, LOKEN K, VAN DE PANNE M. Simbicon: Simple biped locomotion control[J]. ACM Transactions on Graphics (TOG), 2007, 26(3): 105-es.
[37] LIU L, YIN K, VAN DE PANNE M, et al. Sampling-based contact-rich motion control[M]//ACM SIGGRAPH 2010 papers. 2010: 1-10.
[38] MONDAL A K, JAMALI N. A survey of reinforcement learning techniques: strategies, recent development, and future directions[A]. 2020.
[39] PENG X B, ABBEEL P, LEVINE S, et al. Deepmimic: Example-guided deep reinforcement learning of physics-based character skills[J]. ACM Transactions on Graphics (TOG), 2018, 37(4): 1-14.
[40] PENG X B, MA Z, ABBEEL P, et al. Amp: Adversarial motion priors for stylized physics-based character control[J]. ACM Transactions on Graphics (TOG), 2021, 40(4): 1-20.
[41] SCHUSTER M, PALIWAL K K. Bidirectional recurrent neural networks[J]. IEEE Transactions on Signal Processing, 1997, 45(11): 2673-2681.
[42] LI Z, HE D, TIAN F, et al. Towards binary-valued gates for robust lstm training[C]//International Conference on Machine Learning. PMLR, 2018: 2995-3004.
[43] MICHELUCCI U. An introduction to autoencoders[A]. 2022.
[44] KINGMA D P, WELLING M. Auto-encoding variational bayes[A]. 2013.
[45] SOHN K, LEE H, YAN X. Learning structured output representation using deep conditional generative models[J]. Advances in Neural Information Processing Systems, 2015, 28.
[46] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30.
[47] JADERBERG M, SIMONYAN K, ZISSERMAN A, et al. Spatial transformer networks[J]. Advances in Neural Information Processing Systems, 2015, 28.
[48] ZHENG C, ZHU S, MENDIETA M, et al. 3d human pose estimation with spatial and temporal transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 11656-11665.
[49] SONG Z, WANG D, JIANG N, et al. Actformer: A GAN transformer framework towards general action-conditioned 3d human motion generation[A]. 2022.
[50] PETROVICH M, BLACK M J, VAROL G. Action-conditioned 3D human motion synthesis with transformer VAE[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 10985-10995.
[51] MALEK-PODJASKI M, DELIGIANNI F. Adversarial Attention for Human Motion Synthesis[A]. 2022.
[52] PAN S J, YANG Q. A survey on transfer learning[J]. IEEE Transactions on Knowledge and Data Engineering, 2010, 22(10): 1345-1359.