Variant Influence Maximization: Approximation Algorithm and Deep Solution
Over the past two decades, online social platforms have become increasingly popular, and the dissemination of information on social networks has attracted wide attention from both industry and academia. The users of these platforms and the relationships between them can be modeled as a social network. A large body of work has focused on diffusion phenomena on social networks, including the diffusion of ideas, news, and the adoption of new products. The influence maximization problem is one of the most widely studied topics; it seeks a small subset of nodes as seeds so as to maximize the expected number of influenced users under some diffusion model. However, this problem rests on some impractical assumptions, such as a uniform cost of activating nodes as seeds and a uniform profit obtained from each influenced user. In this dissertation, we propose several practical and novel variants of the influence maximization problem: the continuous activity maximization problem, the budget profit maximization with coupon advertisement problem, the adaptive multi-feature budgeted profit maximization problem, and the learning-based influence maximization problem. Because these problems are NP-hard, we focus on designing approximation algorithms and deep reinforcement learning models. Reverse influence sampling and deep Q-network techniques are employed to overcome the #P-hardness of computing the objective functions and to solve the problems more efficiently.
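To make the problem setting concrete, the following is a minimal sketch (not any algorithm from this dissertation) of classic greedy influence maximization under the Independent Cascade model, where each edge activates its endpoint independently with probability p; the toy graph, activation probability, and Monte Carlo settings are illustrative assumptions. The expected spread is estimated by simulation, which is why computing the objective exactly is #P-hard in general.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=None):
    """One Monte Carlo run of the Independent Cascade model.
    graph: dict mapping node -> list of out-neighbors; each newly
    activated node tries once to activate each neighbor with prob p."""
    rng = rng or random
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        newly_active = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    newly_active.append(v)
        frontier = newly_active
    return len(active)

def expected_spread(graph, seeds, p=0.1, runs=200, seed=0):
    """Monte Carlo estimate of the expected number of influenced nodes."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    return sum(simulate_ic(graph, seeds, p, rng) for _ in range(runs)) / runs

def greedy_im(graph, k, p=0.1, runs=200):
    """Greedy seed selection: repeatedly add the node with the largest
    marginal gain in estimated spread. Because the expected spread is
    monotone and submodular, this gives a (1 - 1/e)-approximation."""
    seeds = []
    for _ in range(k):
        base = expected_spread(graph, seeds, p, runs)
        best, best_gain = None, -1.0
        for v in set(graph) - set(seeds):
            gain = expected_spread(graph, seeds + [v], p, runs) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```

This naive greedy requires many simulations per candidate node; reverse influence sampling, mentioned above, is precisely the technique that replaces these repeated forward simulations with pre-computed reverse-reachable sets to achieve the same guarantee far more efficiently.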