Cognitive Stimulation Dialogue Generation Based on Logical Reasoning and Multi-task Integration

JIANG Yuru, LI Mengyuan, TAO Yuyang, OU Keming, SHE Zepeng, SHI Shuicai

Journal of Shanxi University(Natural Science Edition) ›› 2025, Vol. 48 ›› Issue (3) : 516-526. DOI: 10.13451/j.sxu.ns.2024158
Information Sciences


Abstract

Against the backdrop of global population aging, the health problems of older adults have become increasingly prominent, and cognitive stimulation dialogue is an important means of maintaining their cognitive health. Previous researchers constructed CSConv, a Chinese cognitive stimulation dialogue dataset that incorporates emotional support, thereby initiating research on Chinese cognitive stimulation dialogue. However, that work neither fully modeled the logical reasoning relationships within cognitive stimulation dialogues nor effectively exploited the guiding role of strategy labels during response generation. This study treats cognitive stimulation dialogue generation as an integrated multi-task reasoning process: the logical relationships among the emotion classification task, the decision-making task, and the dialogue response generation task are modeled as a chain of reasoning that guides generation by a large language model. For the decision-making task, this paper proposes a decision-making model with a hierarchical encoder structure; experiments show that it improves decision accuracy on cognitive stimulation therapy principles and on emotional support strategies by 3.96% and 2.1%, respectively. For the multi-task reasoning process, this paper proposes a multi-task integration method that combines the models for the three tasks. Experimental results show that this method improves BLEU-4 by 7.95% over the previous baseline, indicating stronger dialogue response ability and demonstrating the effectiveness of the approach.
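The three-task pipeline described above (emotion classification, strategy decision-making, label-guided response generation) can be sketched as follows. This is an illustrative sketch only: the paper's models and labels are not reproduced here, so the function names, the keyword-based stubs, and the strategy labels (`reminiscence`, `orientation`, etc.) are hypothetical stand-ins for the components the abstract describes.

```python
# Hypothetical sketch of the multi-task pipeline from the abstract:
# Task 1 (emotion classification) and Task 2 (strategy decision) produce
# labels that guide Task 3 (LLM response generation) via the prompt.
from dataclasses import dataclass


@dataclass
class TurnContext:
    history: list[str]   # prior dialogue turns
    utterance: str       # the elder's latest utterance


def classify_emotion(ctx: TurnContext) -> str:
    # Task 1 stub: a trivial keyword heuristic standing in for the
    # emotion classification model.
    return "sad" if "lonely" in ctx.utterance else "neutral"


def decide_strategy(ctx: TurnContext, emotion: str) -> tuple[str, str]:
    # Task 2 stub: chooses a (CST principle, emotional-support strategy)
    # pair, standing in for the hierarchical-encoder decision model.
    if emotion == "sad":
        return ("reminiscence", "emotional reflection")
    return ("orientation", "open question")


def build_prompt(ctx: TurnContext, emotion: str, strategy: tuple[str, str]) -> str:
    # Task 3: the predicted labels are injected into the prompt so they
    # guide the large language model's response generation.
    principle, support = strategy
    return (
        f"Dialogue history: {' / '.join(ctx.history)}\n"
        f"User: {ctx.utterance}\n"
        f"Detected emotion: {emotion}\n"
        f"Apply CST principle '{principle}' and support strategy '{support}'.\n"
        f"Response:"
    )


ctx = TurnContext(
    history=["Hello, how are you feeling today?"],
    utterance="I feel lonely since my son moved away.",
)
emotion = classify_emotion(ctx)
strategy = decide_strategy(ctx, emotion)
prompt = build_prompt(ctx, emotion, strategy)
print(prompt)
```

The design point is that the two upstream tasks constrain the generator through explicit labels rather than leaving strategy selection implicit in the decoder, which is the guiding role of strategy labels the abstract argues the earlier work did not exploit.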

Key words

cognitive stimulation / emotional support / decision-making tasks / multi-task integration method

Cite this article

JIANG Yuru, LI Mengyuan, TAO Yuyang, et al. Cognitive Stimulation Dialogue Generation Based on Logical Reasoning and Multi-task Integration[J]. Journal of Shanxi University(Natural Science Edition), 2025, 48(3): 516-526. https://doi.org/10.13451/j.sxu.ns.2024158

