[1] MAURIELLO M L, LINCOLN T, HON G, et al. SAD: A stress annotated dataset for recognizing everyday stressors in SMS-like conversational systems [C]// Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. 2021: 399.
[2] GARG M, SAXENA C, SAHA S, et al. CAMS: An annotated corpus for causal analysis of mental health issues in social media posts [C]// Proceedings of the Thirteenth Language Resources and Evaluation Conference. 2022: 6387-6396.
[3] SAXENA C, GARG M, ANSARI G. Explainable causal analysis of mental health on social media data [C]// International Conference on Neural Information Processing. 2022: 172-183.
[4] YANG K, JI S, ZHANG T, et al. Towards interpretable mental health analysis with large language models [C]// Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023: 6056-6077.
[5] YANG K, ZHANG T, KUANG Z, et al. MentaLLaMA: Interpretable mental health analysis on social media with large language models [EB/OL]. (2023-09-24)[2024-01-06]. https://arxiv.org/abs/2309.13567.
[6] ZHOU Y, MURESANU A I, HAN Z, et al. Large language models are human-level prompt engineers [EB/OL]. (2023-03-10)[2024-01-06]. https://arxiv.org/abs/2211.01910.
[7] XU N, QIAO C, GENG X, et al. Instance-dependent partial label learning [J]. Advances in Neural Information Processing Systems, 2021, 34: 27119-27130.
[8] RIBEIRO M T, SINGH S, GUESTRIN C. “Why should I trust you?”: Explaining the predictions of any classifier [C]// Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2016: 1135-1144.
[9] ZHAO W X, ZHOU K, LI J, et al. A survey of large language models [EB/OL]. (2023-11-24)[2024-01-06]. https://arxiv.org/abs/2303.18223.
[10] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: Open and efficient foundation language models [EB/OL]. (2023-02-27)[2024-01-06]. https://arxiv.org/abs/2302.13971.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need [C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017: 6000-6010.
[12] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding [EB/OL]. (2019-05-24)[2024-01-06]. https://arxiv.org/abs/1810.04805.
[13] LIU P, YUAN W, FU J, et al. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing [J]. ACM Computing Surveys, 2023, 55(9): 1-35.
[14] LU Y, BARTOLO M, MOORE A, et al. Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity [C]// Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. 2022: 8086-8098.
[15] HOFFMANN J, BORGEAUD S, MENSCH A, et al. An empirical analysis of compute-optimal large language model training [J]. Advances in Neural Information Processing Systems, 2022, 35: 30016-30030.
[16] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners [J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.
[17] CHUNG H W, HOU L, LONGPRE S, et al. Scaling instruction-finetuned language models [EB/OL]. (2022-12-06)[2024-01-06]. https://arxiv.org/abs/2210.11416.
[18] WEI J, WANG X, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models [J]. Advances in Neural Information Processing Systems, 2022, 35: 24824-24837.
[19] BLEI D M, KUCUKELBIR A, MCAULIFFE J D. Variational inference: A review for statisticians [J]. Journal of the American Statistical Association, 2017, 112(518): 859-877.
[20] HOFFMAN M, BLEI D M, WANG C, et al. Stochastic variational inference [J]. Journal of Machine Learning Research, 2013, 14: 1303-1347.
[21] RANGANATH R, GERRISH S, BLEI D. Black box variational inference [C]// Proceedings of the 17th International Conference on Artificial Intelligence and Statistics. 2014: 814-822.
[22] OPPER M, SAAD D. Advanced Mean Field Methods: Theory and Practice [M]. Cambridge, MA: MIT Press, 2001.
[23] PRYZANT R, ITER D, LI J, et al. Automatic prompt optimization with “gradient descent” and beam search [C]// Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. 2023: 7957-7968.
[24] ZHU X. Semi-supervised learning with graphs [D]. Pittsburgh, PA: Carnegie Mellon University, 2005.
[25] KIPF T N, WELLING M. Variational graph auto-encoders [EB/OL]. (2016-11-21)[2024-01-06]. https://arxiv.org/abs/1611.07308.
[26] KIPF T N, WELLING M. Semi-supervised classification with graph convolutional networks [EB/OL]. (2017-02-22)[2024-01-06]. https://arxiv.org/abs/1609.02907.
[27] FIGURNOV M, MOHAMED S, MNIH A. Implicit reparameterization gradients [C]// Proceedings of the 32nd International Conference on Neural Information Processing Systems. 2018: 439-450.
[28] POPESCU M C, BALAS V E, PERESCU-POPESCU L, et al. Multilayer perceptron and neural networks [J]. WSEAS Transactions on Circuits and Systems, 2009, 8(7): 579-588.