  • Open Access
  • Article
LTVGN: Mastering Predictions of Information Transmissibility in Time-Varying Information Networks
  • Xinrui Shi
  • Yupeng Li *

Received: 13 Sep 2025 | Revised: 09 Oct 2025 | Accepted: 14 Nov 2025 | Published: 04 Jan 2026

Abstract

In the era of information overload, diverse types of information interconnect to form complex networks. To better manage diffusion paths within such networks, we propose predicting information transmissibility, i.e., the probability that a piece of information is transmitted under the influence of other information in the network. Accurate transmissibility prediction has practical applications in recommendation systems and misinformation control, enabling relevant information to reach appropriate audiences while curbing the spread of less useful content. Given the characteristics of information networks, text-attributed graphs provide a natural representation that captures both network structure and content semantics. However, existing text-attributed graph representation methods fail to capture diffusion dynamics and incur high computational costs. We therefore propose a novel, efficient textual-graph model, the Language Temporal Variation Graph Network (LTVGN), which predicts transmissibility by capturing time-varying features, structural information, and textual information. We evaluate the proposed model on the HEP-TH citation dataset. The results demonstrate that our model outperforms state-of-the-art models, achieving low estimation error.
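The abstract describes fusing textual attributes, graph structure, and temporal signals to score per-node transmissibility. As a purely illustrative aid, the sketch below shows one plausible way such a fusion could be wired in PyTorch; the module names, dimensions, and aggregation scheme are assumptions for illustration and do not reproduce the LTVGN architecture described in the paper.

```python
# Illustrative sketch only: a hypothetical model that fuses precomputed text
# embeddings with a time-aware neighborhood aggregation to score per-node
# transmissibility in [0, 1]. This is NOT the authors' LTVGN architecture;
# all module names and dimensions below are assumptions for illustration.
import torch
import torch.nn as nn

class TransmissibilitySketch(nn.Module):
    def __init__(self, text_dim: int = 384, hidden_dim: int = 64):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)  # project text-attribute embeddings
        self.time_proj = nn.Linear(1, hidden_dim)          # embed per-edge timestamps
        self.update = nn.GRUCell(hidden_dim, hidden_dim)   # fold neighbor messages into node state
        self.readout = nn.Linear(hidden_dim, 1)            # map node state to a logit

    def forward(self, text_emb, edge_index, edge_time):
        # text_emb:   [num_nodes, text_dim]  precomputed text embeddings
        # edge_index: [2, num_edges]         (source, target) node pairs
        # edge_time:  [num_edges, 1]         normalized interaction times
        h = torch.tanh(self.text_proj(text_emb))
        src, dst = edge_index
        # message = source state gated by a learned time embedding
        msg = h[src] * torch.sigmoid(self.time_proj(edge_time))
        # mean-aggregate messages per target node
        agg = torch.zeros_like(h).index_add_(0, dst, msg)
        deg = torch.zeros(h.size(0), 1).index_add_(
            0, dst, torch.ones(dst.size(0), 1)).clamp(min=1.0)
        h = self.update(agg / deg, h)
        # transmissibility estimate per node, in [0, 1]
        return torch.sigmoid(self.readout(h)).squeeze(-1)

# Toy usage: 4 nodes, 3 timestamped edges.
model = TransmissibilitySketch()
text_emb = torch.randn(4, 384)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
edge_time = torch.tensor([[0.1], [0.5], [0.9]])
print(model(text_emb, edge_index, edge_time))  # tensor of 4 probabilities
```

The sigmoid readout reflects the framing of transmissibility as a probability; the time-gated messages are one simple stand-in for the time-varying features the abstract refers to.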


How to Cite
Shi, X.; Li, Y. LTVGN: Mastering Predictions of Information Transmissibility in Time-Varying Information Networks. Transactions on Artificial Intelligence 2026, 2 (1), 1–14. https://doi.org/10.53941/tai.2026.100001.
Copyright & License
Copyright (c) 2026 by the authors.