[1] HACHAJ T, PIEKARCZYK M, OGIELA M R. Supporting Rehabilitation Process with Novel Motion Capture Analysis Method[C]// 2018 2nd European Conference on Electrical Engineering and Computer Science (EECS). 2018.
[2] JIE G, GUILING C, LIN H, et al. Application Research on Motion Capture System Data Reuse in Virtual Reality Environment[C]//2010 Second International Conference on Intelligent Human-Machine Systems and Cybernetics. IEEE, 2010: 114-116.
[3] NOIUMKAR S, TIRAKOAT S. Use of Optical Motion Capture in Sports Science: A Case Study of Golf Swing[C]// International Conference on Informatics & Creative Multimedia. IEEE, 2014.
[4] GUERRA-FILHO G. Optical Motion Capture: Theory and Implementation[J]. RITA, 2005, 12(2): 61-90.
[5] KOK M, HOL J D, SCHÖN T B. An optimization-based approach to human body motion capture using inertial sensors[J]. IFAC Proceedings Volumes, 2014, 47(3): 79-85.
[6] KIM Y, BAEK S, BAE B C. Motion capture of the human body using multiple depth sensors[J]. ETRI Journal, 2017, 39(2): 181-190.
[7] CAO Z, SIMON T, WEI S E, et al. Realtime multi-person 2d pose estimation using part affinity fields[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 7291-7299.
[8] CHEN X, YUILLE A L. Articulated pose estimation by a graphical model with image dependent pairwise relations[J]. Advances in neural information processing systems, 2014, 27.
[9] NEWELL A, YANG K, DENG J. Stacked hourglass networks for human pose estimation[C]//European conference on computer vision. Springer, Cham, 2016: 483-499.
[10] TOMPSON J J, JAIN A, LECUN Y, et al. Joint training of a convolutional network and a graphical model for human pose estimation[J]. Advances in neural information processing systems, 2014, 27.
[11] TOSHEV A, SZEGEDY C. Deeppose: Human pose estimation via deep neural networks[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2014: 1653-1660.
[12] WEI S E, RAMAKRISHNA V, KANADE T, et al. Convolutional pose machines[C]//Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2016: 4724-4732.
[13] PAVLAKOS G, ZHOU X, DERPANIS K G, et al. Coarse-to-fine volumetric prediction for single-image 3D human pose[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2017: 7025-7034.
[14] PAVLAKOS G, ZHU L, ZHOU X, et al. Learning to estimate 3D human pose and shape from a single color image[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2018: 459-468.
[15] TOME D, RUSSELL C, AGAPITO L. Lifting from the deep: Convolutional 3d pose estimation from a single image[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 2500-2509.
[16] ZHOU X, ZHU M, LEONARDOS S, et al. Sparseness meets deepness: 3d human pose estimation from monocular video[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 4966-4975.
[17] MEHTA D, SOTNYCHENKO O, MUELLER F, et al. Single-shot multi-person 3D pose estimation from monocular RGB[C]//2018 International Conference on 3D Vision (3DV). IEEE, 2018: 120-130.
[18] MEHTA D, SRIDHAR S, SOTNYCHENKO O, et al. VNect: Real-time 3D human pose estimation with a single RGB camera[J]. ACM Transactions on Graphics (TOG), 2017, 36(4): 1-14.
[19] OMRAN M, LASSNER C, PONS-MOLL G, et al. Neural body fitting: Unifying deep learning and model based human pose and shape estimation[C]//2018 international conference on 3D vision (3DV). IEEE, 2018: 484-494.
[20] COLYER S L, EVANS M, COSKER D P, et al. A review of the evolution of vision-based motion analysis and the integration of advanced computer vision methods towards developing a markerless system[J]. Sports Medicine - Open, 2018, 4(1): 1-15.
[21] PAVLLO D, FEICHTENHOFER C, GRANGIER D, et al. 3d human pose estimation in video with temporal convolutions and semi-supervised training[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 7753-7762.
[22] CAI Y, GE L, LIU J, et al. Exploiting spatial-temporal relationships for 3d pose estimation via graph convolutional networks[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 2272-2281.
[23] QIU H, WANG C, WANG J, et al. Cross view fusion for 3d human pose estimation[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019: 4342-4351.
[24] ROETENBERG D, LUINGE H, SLYCKE P. Xsens MVN: Full 6DOF human motion tracking using miniature inertial sensors[R]. Xsens Motion Technologies BV, Tech. Rep., 2009, 1.
[25] MIEZAL M, TAETZ B, BLESER G. On Inertial Body Tracking in the Presence of Model Calibration Errors[J]. Sensors, 2016, 16(7): 1132.
[26] BOUVIER B, DUPREY S, CLAUDON L, et al. Upper Limb Kinematics Using Inertial and Magnetic Sensors: Comparison of Sensor-to-Segment Calibrations[J]. Sensors, 2015.
[27] TAETZ B, BLESER G, MIEZAL M. Towards Self-Calibrating Inertial Body Motion Capture[J]. 2016.
[28] MÜLLER P, BÉGIN M A, SCHAUER T, et al. Alignment-free, self-calibrating elbow angles measurement using inertial sensors[J]. IEEE journal of biomedical and health informatics, 2016, 21(2): 312-319.
[29] MIEZAL M, TAETZ B, BLESER G. Real-time inertial lower body kinematics and ground contact estimation at anatomical foot points for agile human locomotion[C]//2017 IEEE international conference on robotics and automation (ICRA). IEEE, 2017: 3256-3263.
[30] KOK M, HOL J D, SCHÖN T B. An optimization-based approach to human body motion capture using inertial sensors[J]. IFAC Proceedings Volumes, 2014, 47(3): 79-85.
[31] ZHANG Z, WONG W, WU J. Ubiquitous Human Upper-Limb Motion Estimation using Wearable Sensors[J]. IEEE Transactions on Information Technology in Biomedicine, 2011, 15(4): 513-521.
[32] WONG W. Wearable sensors for 3D upper limb motion modeling and ubiquitous estimation[J]. Control Theory and Technology, 2011(01):10-17.
[33] MING X, HU Y Z. Noitom: Taking an African legend global[J]. Zhongguancun, 2019(8).
[34] CUI L J, HUANG T Y, FENG F, et al. Pose reconstruction and simulation based on a small number of inertial sensors[J]. Journal of System Simulation, 2017, 29(10): 2261-2267.
[35] VLASIC D, ADELSBERGER R, VANNUCCI G, et al. Practical motion capture in everyday surroundings[J]. ACM transactions on graphics (TOG), 2007, 26(3): 35-es.
[36] TROJE N F. Decomposing biological motion: A framework for analysis and synthesis of human gait patterns[J]. Journal of vision, 2002, 2(5): 2-2.
[37] SANGER T D. Human arm movements described by a low-dimensional superposition of principal components[J]. Journal of Neuroscience, 2000, 20(3): 1066-1072.
[38] SAFONOVA A, HODGINS J K, POLLARD N S. Synthesizing physically realistic human motion in low-dimensional, behavior-specific spaces[J]. ACM Transactions on Graphics (ToG), 2004, 23(3): 514-521.
[39] PONS-MOLL G, BAAK A, GALL J, et al. Outdoor human motion capture using inverse kinematics and von mises-fisher sampling[C]//2011 International Conference on Computer Vision. IEEE, 2011: 1243-1250.
[40] VON MARCARD T, PONS-MOLL G, ROSENHAHN B. Human pose estimation from video and imus[J]. IEEE transactions on pattern analysis and machine intelligence, 2016, 38(8): 1533-1547.
[41] ANDREWS S, HUERTA I, KOMURA T, et al. Real-time Physics-based Motion Capture with Sparse Sensors[C]//European Conference on Visual Media Production. ACM, 2016: 1-10.
[42] TAUTGES J, ZINKE A, KRÜGER B, et al. Motion Reconstruction Using Sparse Accelerometer Data[J]. ACM Transactions on Graphics, 2011, 30(3): 1-12.
[43] VON MARCARD T, ROSENHAHN B, BLACK M J, et al. Sparse inertial poser: Automatic 3d human pose estimation from sparse imus[C]//Computer Graphics Forum. 2017, 36(2): 349-360.
[44] LOPER M, MAHMOOD N, ROMERO J, et al. SMPL: A skinned multi-person linear model[J]. ACM Transactions on Graphics (TOG), 2015, 34(6): 1-16.
[45] HUANG Y, KAUFMANN M, AKSAN E, et al. Deep inertial poser: Learning to reconstruct human pose from sparse inertial measurements in real time[J]. ACM Transactions on Graphics (TOG), 2018, 37(6): 1-15.
[46] YI X, ZHOU Y, XU F. TransPose: real-time 3D human translation and pose estimation with six inertial sensors[J]. ACM Transactions on Graphics (TOG), 2021, 40(4): 1-13.
[47] YUHANG Y, YINGLIANG L, YAO M, et al. Training data selection for short term load forecasting[C]//2011 Third International Conference on Measuring Technology and Mechatronics Automation. IEEE, 2011, 3: 1040-1043.
[48] BALCÁZAR J, DAI Y, WATANABE O. A random sampling technique for training support vector machines[C]//International Conference on Algorithmic Learning Theory. Springer, Berlin, Heidelberg, 2001: 119-134.
[49] FERRAGUT E M, LASKA J. Randomized sampling for large data applications of SVM[C]//2012 11th International Conference on Machine Learning and Applications. IEEE, 2012, 1: 350-355.
[50] TSAGALIDIS E, EVANGELIDIS G. The effect of training set selection in meteorological data mining[C]//2010 14th Panhellenic Conference on Informatics. IEEE, 2010: 61-65.
[51] LEE Y J, MANGASARIAN O L. RSVM: Reduced support vector machines[C]//Proceedings of the 2001 SIAM International Conference on Data Mining. Society for Industrial and Applied Mathematics, 2001: 1-17.
[52] LEE Y J, HUANG S Y. Reduced support vector machines: A statistical theory[J]. IEEE Transactions on neural networks, 2007, 18(1): 1-13.
[53] LI X, CERVANTES J, YU W. Fast classification for large data sets via random selection clustering and support vector machines[J]. Intelligent Data Analysis, 2012, 16(6): 897-914.
[54] ZHAI J H, XU H Y, ZHANG S F, et al. Instance selection based on supervised clustering[C]//2012 International Conference on Machine Learning and Cybernetics. IEEE, 2012, 1: 112-117.
[55] KOSKIMÄKI H, JUUTILAINEN I, LAURINEN P, et al. Two-level clustering approach to training data instance selection: a case study for the steel industry[C]//2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). IEEE, 2008: 3044-3049.
[56] KANEDA Y, ZHAO Q, LIU Y. On-Line training with guide data: Shall we select the guide data randomly or based on cluster centers?[C]//2016 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 2016: 1-7.
[57] ZHENG J, YANG W, LI X. Training data reduction in deep neural networks with partial mutual information based feature selection and correlation matching based active learning[C]//2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2017: 2362-2366.
[58] LIU S H, CHU F H, LIN S H, et al. Training data selection for improving discriminative training of acoustic models[C]//2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU). IEEE, 2007: 284-289.
[59] MOORE R C, LEWIS W. Intelligent selection of language model training data[C]//Proceedings of the ACL 2010 conference short papers. 2010: 220-224.
[60] COLE C A, THRASHER J F, STRAYER S M, et al. Resolving ambiguities in accelerometer data due to location of sensor on wrist in application to detection of smoking gesture[C]//2017 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI). IEEE, 2017: 489-492.
[61] ORHA I, ONIGA S. Study regarding the optimal sensors placement on the body for human activity recognition[C]//2014 IEEE 20th International Symposium for Design and Technology in Electronic Packaging (SIITME). IEEE, 2014: 203-206.
[62] DOBRUCALI O, BARSHAN B. Sensor-activity relevance in human activity recognition with wearable motion sensors and mutual information criterion[M]//Information Sciences and Systems 2013. Springer, Cham, 2013: 285-294.
[63] CHING Y T, CHENG C C, HE G W, et al. Full model for sensors placement and activities recognition[C]//Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers. 2017: 17-20.
[64] MURAO K, MOGARI H, TERADA T, et al. Evaluation function of sensor position for activity recognition considering wearability[C]//Proceedings of the 2013 ACM conference on Pervasive and ubiquitous computing adjunct publication. 2013: 623-632.
[65] BANOS O, TOTH M A, DAMAS M, et al. Dealing with the effects of sensor displacement in wearable activity recognition[J]. Sensors, 2014, 14(6): 9995-10023.
[66] KUNZE K, LUKOWICZ P. Sensor placement variations in wearable activity recognition[J]. IEEE Pervasive Computing, 2014, 13(4): 32-41.
[67] SCHUSTER M, PALIWAL K K. Bidirectional recurrent neural networks[J]. IEEE Transactions on Signal Processing, 1997, 45(11): 2673-2681.
[68] DING C, PENG H. Minimum redundancy feature selection from microarray gene expression data[J]. Journal of bioinformatics and computational biology, 2005, 3(02): 185-205.
[69] RESHEF D N, RESHEF Y A, FINUCANE H K, et al. Detecting novel associations in large data sets[J]. Science, 2011, 334(6062): 1518-1524.
[70] MALHI A, GAO R X. PCA-based feature selection scheme for machine defect classification[J]. IEEE transactions on instrumentation and measurement, 2004, 53(6): 1517-1525.
[71] Mixamo. 2020. Available online: https://www.mixamo.com/#/?page=1&type=Character (accessed on 9 May 2021)
[72] TROJE N F. Decomposing biological motion: A framework for analysis and synthesis of human gait patterns[J]. Journal of vision, 2002, 2(5): 2-2.
[73] MÜLLER M, RÖDER T, CLAUSEN M, et al. Documentation Mocap Database HDM05[R]. Universität Bonn, 2007.
[74] DE LA TORRE F, HODGINS J, BARGTEIL A, et al. Guide to the Carnegie Mellon University Multimodal Activity (CMU-MMAC) Database[R]. Carnegie Mellon University, 2009.
[75] IONESCU C, PAPAVA D, OLARU V, et al. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(7): 1325-1339.
[76] MANDERY C, TERLEMEZ Ö, DO M, et al. The KIT whole-body human motion database[C]//2015 International Conference on Advanced Robotics (ICAR). IEEE, 2015: 329-336.
[77] AKHTER I, BLACK M J. Pose-conditioned joint angle limits for 3D human pose reconstruction[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 1446-1455.
[78] TRUMBLE M, GILBERT A, MALLESON C, et al. Total capture: 3d human pose estimation fusing video and inertial sensors[C]//Proceedings of 28th British Machine Vision Conference. University of Surrey, 2017: 1-13.
[79] MAHMOOD N, GHORBANI N, TROJE N F, et al. AMASS: Archive of motion capture as surface shapes[C]//Proceedings of the IEEE/CVF international conference on computer vision. 2019: 5442-5451.