TY - STD
TI - Kuehne H, Jhuang H, Garrote E, Poggio T, Serre T (2011) HMDB: a large video database for human motion recognition. Paper presented at 2011 IEEE international conference on computer vision, IEEE, Barcelona, pp. 2556–2563
UR - https://doi.org/10.1109/ICCV.2011.6126543
ID - ref1
ER -

TY - STD
TI - Soomro K, Zamir AR, Shah M (2012) UCF101: a dataset of 101 human actions classes from videos in the wild. arXiv preprint arXiv:1212.0402
ID - ref2
ER -

TY - STD
TI - Kay W, Carreira J, Simonyan K, Zhang B, Hillier C, Vijayanarasimhan S, et al (2017) The kinetics human action video dataset. arXiv preprint arXiv:1705.06950
ID - ref3
ER -

TY - STD
TI - Goyal R, Kahou SE, Michalski V, Materzynska J, Westphal S, Kim H, et al (2017) The “something something” video database for learning and evaluating visual common sense. Paper presented at 2017 IEEE international conference on computer vision, IEEE, Venice, pp. 5843–5851
UR - https://doi.org/10.1109/ICCV.2017.622
ID - ref4
ER -

TY - JOUR
AU - Simonyan, K.
AU - Zisserman, A.
PY - 2014
DA - 2014//
TI - Two-stream convolutional networks for action recognition in videos
JO - Adv Neural Inf Proces Syst
VL - 2014
ID - Simonyan2014
ER -

TY - STD
TI - Feichtenhofer C, Pinz A, Zisserman A (2016) Convolutional two-stream network fusion for video action recognition. Paper presented at the 29th IEEE conference on computer vision and pattern recognition, IEEE, Las Vegas, pp. 1933–1941
UR - https://doi.org/10.1109/CVPR.2016.213
ID - ref6
ER -

TY - CHAP
AU - Wang, L. M.
AU - Xiong, Y. J.
AU - Wang, Z.
AU - Qiao, Y.
AU - Lin, D. H.
AU - Tang, X. O.
ED - Leibe, B.
ED - Matas, J.
ED - Sebe, N.
ED - Welling, M.
PY - 2016
DA - 2016//
TI - Temporal segment networks: towards good practices for deep action recognition
BT - Computer vision – ECCV 2016, lecture notes in computer science
PB - Springer
CY - Cham
UR - https://doi.org/10.1007/978-3-319-46484-8_2
DO - 10.1007/978-3-319-46484-8_2
ID - Wang2016
ER -

TY - BOOK
AU - Tran, D.
AU - Bourdev, L.
AU - Fergus, R.
AU - Torresani, L.
AU - Paluri, M.
PY - 2015
DA - 2015//
TI - Learning spatiotemporal features with 3D convolutional networks
UR - https://doi.org/10.1109/ICCV.2015.510
DO - 10.1109/ICCV.2015.510
ID - Tran2015
ER -

TY - BOOK
AU - Carreira, J.
AU - Zisserman, A.
PY - 2017
DA - 2017//
TI - Quo vadis, action recognition? A new model and the kinetics dataset
UR - https://doi.org/10.1109/CVPR.2017.502
DO - 10.1109/CVPR.2017.502
ID - Carreira2017
ER -

TY - BOOK
AU - Qiu, Z. F.
AU - Yao, T.
AU - Mei, T.
PY - 2017
DA - 2017//
TI - Learning spatio-temporal representation with pseudo-3D residual networks
UR - https://doi.org/10.1109/ICCV.2017.590
DO - 10.1109/ICCV.2017.590
ID - Qiao2017
ER -

TY - BOOK
AU - Tran, D.
AU - Wang, H.
AU - Torresani, L.
AU - Ray, J.
AU - LeCun, Y.
AU - Paluri, M.
PY - 2018
DA - 2018//
TI - A closer look at spatiotemporal convolutions for action recognition
UR - https://doi.org/10.1109/CVPR.2018.00675
DO - 10.1109/CVPR.2018.00675
ID - Tran2018
ER -

TY - CHAP
AU - Zolfaghari, M.
AU - Singh, K.
AU - Brox, T.
ED - Ferrari, V.
ED - Hebert, M.
ED - Sminchisescu, C.
ED - Weiss, Y.
PY - 2018
DA - 2018//
TI - ECO: efficient convolutional network for online video understanding
BT - Proceedings of the 15th European conference on computer vision
PB - Springer
CY - Cham
UR - https://doi.org/10.1007/978-3-030-01216-8_43
DO - 10.1007/978-3-030-01216-8_43
ID - Zolfaghari2018
ER -

TY - BOOK
AU - Lin, J.
AU - Gan, C.
AU - Han, S.
PY - 2019
DA - 2019//
TI - TSM: temporal shift module for efficient video understanding
ID - Lin2019
ER -

TY - BOOK
AU - He, K. M.
AU - Zhang, X. Y.
AU - Ren, S. Q.
AU - Sun, J.
PY - 2016
DA - 2016//
TI - Deep residual learning for image recognition
UR - https://doi.org/10.1109/CVPR.2016.90
DO - 10.1109/CVPR.2016.90
ID - He2016
ER -

TY - STD
TI - Tran D, Ray J, Shou Z, Chang SF, Paluri M (2017) Convnet architecture search for spatiotemporal feature learning. arXiv preprint arXiv:1708.05038
ID - ref15
ER -

TY - BOOK
AU - Hu, J.
AU - Shen, L.
AU - Sun, G.
PY - 2018
DA - 2018//
TI - Squeeze-and-excitation networks
UR - https://doi.org/10.1109/CVPR.2018.00745
DO - 10.1109/CVPR.2018.00745
ID - Hu2018
ER -

TY - BOOK
AU - Wang, H.
AU - Schmid, C.
PY - 2013
DA - 2013//
TI - Action recognition with improved trajectories
UR - https://doi.org/10.1109/ICCV.2013.441
DO - 10.1109/ICCV.2013.441
ID - Wang2013
ER -

TY - BOOK
AU - Lan, Z. Z.
AU - Lin, M.
AU - Li, X. C.
AU - Hauptmann, A. G.
AU - Raj, B.
PY - 2015
DA - 2015//
TI - Beyond Gaussian pyramid: multi-skip feature stacking for action recognition
ID - Lan2015
ER -

TY - BOOK
AU - Hara, K.
AU - Kataoka, H.
AU - Satoh, Y.
PY - 2018
DA - 2018//
TI - Can spatiotemporal 3D CNNs retrace the history of 2D CNNs and ImageNet?
UR - https://doi.org/10.1109/CVPR.2018.00685
DO - 10.1109/CVPR.2018.00685
ID - Hara2018
ER -

TY - BOOK
AU - Zhu, J. G.
AU - Zou, W.
AU - Zhu, Z.
PY - 2018
DA - 2018//
TI - End-to-end video-level representation learning for action recognition
UR - https://doi.org/10.1109/ICPR.2018.8545710
DO - 10.1109/ICPR.2018.8545710
ID - Zhu2018
ER -

TY - JOUR
AU - Wu, C. L.
AU - Cao, H. W.
AU - Zhang, W. S.
AU - Wang, L. Q.
AU - Wei, Y. W.
AU - Peng, Z. X.
PY - 2019
DA - 2019//
TI - Refined spatial network for human action recognition
JO - IEEE Access
VL - 7
UR - https://doi.org/10.1109/ACCESS.2019.2933303
DO - 10.1109/ACCESS.2019.2933303
ID - Wu2019
ER -

TY - BOOK
AU - Yuan, Y.
AU - Wang, D.
AU - Wang, Q.
PY - 2019
DA - 2019//
TI - Memory-augmented temporal dynamic learning for action recognition
UR - https://doi.org/10.1609/aaai.v33i01.33019167
DO - 10.1609/aaai.v33i01.33019167
ID - Yuan2019
ER -

TY - BOOK
AU - Zhou, B. L.
AU - Andonian, A.
AU - Oliva, A.
AU - Torralba, A.
PY - 2018
DA - 2018//
TI - Temporal relational reasoning in videos
UR - https://doi.org/10.1007/978-3-030-01246-5_49
DO - 10.1007/978-3-030-01246-5_49
ID - Zhou2018
ER -

TY - STD
TI - Shi P (2018) Research of speech emotion recognition based on deep neural network. Dissertation, Wuhan University of Technology
ID - ref24
ER -