Aydoğan, M., & Kocaman, V. (2022). TRSAv1: A new benchmark dataset for classifying user reviews on Turkish e-commerce websites. Journal of Information Science, 49(6), 1711-1725. https://doi.org/10.1177/01655515221074328
Burkhardt, F., Paeschke, A., Rolfes, M., Sendlmeier, W. F., & Weiss, B. (2005). A database of German emotional speech. Interspeech 2005, 1517-1520. https://doi.org/10.21437/interspeech.2005-446
Busso, C., Bulut, M., Lee, C.-C., Kazemzadeh, A., Mower, E., Kim, S., Chang, J. N., Lee, S., & Narayanan, S. S. (2008). IEMOCAP: Interactive emotional dyadic motion capture database. Language Resources and Evaluation, 42(4), 335-359. https://doi.org/10.1007/s10579-008-9076-6
Cao, H., Cooper, D. G., Keutmann, M. K., Gur, R. C., Nenkova, A., & Verma, R. (2014). CREMA-D: Crowd-sourced emotional multimodal actors dataset. IEEE Transactions on Affective Computing, 5(4), 377-390. https://doi.org/10.1109/TAFFC.2014.2336244
Chollet, F. (2017, July 21-26). Xception: Deep learning with depthwise separable convolutions [Paper]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1251-1258, Honolulu, HI, USA. IEEE. https://doi.org/10.1109/CVPR.2017.195
Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297. https://doi.org/10.1007/BF00994018
Demirtaş, S. C., & Hakdağlı, Ö. (2022, November 24-26). Dönüştürücü-CNN modeli ile Türkçe konuşma verisi üzerinde duygu tanıma [Emotion recognition on Turkish speech data with a Transformer-CNN model] [Paper]. ELECO 2022 Elektrik - Elektronik ve Bilgisayar Mühendisliği Sempozyumu, Bursa, Türkiye. IEEE.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database [Paper]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 248-255, Miami, FL, USA. IEEE. https://doi.org/10.1109/CVPR.2009.5206848
Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2019, June 2-7). BERT: Pre-training of deep bidirectional transformers for language understanding [Paper]. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA. Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1423
Dupuis, K., & Pichora-Fuller, M. K. (2010). Toronto emotional speech set (TESS) [Dataset]. https://doi.org/10.5683/SP2/E8H2MF
Ekman, P., & Friesen, W. V. (1978). Facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press. https://doi.org/10.1037/t27734-000
Goodfellow, I., Erhan, D., Carrier, P. L., Courville, A., Mirza, M., Hamner, B., Cukierski, W., Tang, Y., Thaler, D., Lee, D.-H., Zhou, Y., Ramaiah, C., Feng, F., Li, R., Wang, X., Athanasakis, D., Shawe-Taylor, J., Milakov, M., Park, J., & Bengio, Y. (2013). Challenges in representation learning: A report on three machine learning contests. Proceedings of the Neural Information Processing Systems (NIPS) Workshop.
Huang, G. B., Ramesh, M., Berg, T., & Learned-Miller, E. (2007). Labeled faces in the wild: A database for studying face recognition in unconstrained environments (Technical Report 07-49). University of Massachusetts, Amherst.
Jackson, P. J. B., & Haq, S. (2014). Surrey audio-visual expressed emotion (SAVEE) database. University of Surrey.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., & Soricut, R. (2020). ALBERT: A lite BERT for self-supervised learning of language representations [Paper]. International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia. https://doi.org/10.48550/arXiv.1909.11942
Livingstone, S. R., & Russo, F. A. (2018). The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLOS ONE, 13(5), e0196391. https://doi.org/10.1371/journal.pone.0196391
Lotfian, R., & Busso, C. (2019). Curriculum learning for speech emotion recognition from crowdsourced labels. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 27(4), 815-826. https://doi.org/10.1109/TASLP.2019.2898816
Lucey, P., Cohn, J. F., Kanade, T., Saragih, J., Ambadar, Z., & Matthews, I. (2010). The Extended Cohn-Kanade Dataset (CK+): A complete dataset for action unit and emotion-specified expression [Paper]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), San Francisco, CA, USA. https://doi.org/10.1109/CVPRW.2010.5543262
Lyons, M., Akamatsu, S., Kamachi, M., & Gyoba, J. (1998). Coding facial expressions with Gabor wavelets [Paper]. Proceedings of the Third IEEE International Conference on Automatic Face and Gesture Recognition, 200-205, Nara, Japan. IEEE. https://doi.org/10.1109/AFGR.1998.670949
Pang, B., & Lee, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2), 1-135. https://doi.org/10.1561/1500000011
Poria, S., Cambria, E., Hazarika, D., & Mazumder, N. (2017, November 18-21). Multi-level multiple attentions for contextual multimodal sentiment analysis [Paper]. Proceedings of the IEEE International Conference on Data Mining (ICDM), New Orleans, LA, USA. IEEE. https://doi.org/10.1109/ICDM.2017.104
Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition [Paper]. International Conference on Learning Representations (ICLR), San Diego, CA, USA. https://doi.org/10.48550/arXiv.1409.1556
Sun, C., Huang, L., & Qiu, X. (2019, June 2-7). Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence [Paper]. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, MN, USA. Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1035
Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., & Rabinovich, A. (2015). Going deeper with convolutions [Paper]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1-9, Boston, MA, USA. IEEE. https://doi.org/10.1109/CVPR.2015.7298594
Xu, H., Liu, B., Shu, L., & Yu, P. S. (2019, June 2-7). BERT post-training for review reading comprehension and aspect-based sentiment analysis [Paper]. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Minneapolis, MN, USA. Association for Computational Linguistics. https://doi.org/10.18653/v1/N19-1242
Zadeh, A., Chen, M., Poria, S., Cambria, E., & Morency, L.-P. (2018). Multimodal sentiment intensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 34(1), 82-88. https://doi.org/10.1109/MIS.2018.2888673