Contrastive Graph Prompt-tuning for Cross-domain Recommendation
Zixuan Yi, Iadh Ounis, Craig Macdonald
Recommender systems commonly suffer from the long-standing data sparsity problem, where insufficient user-item interaction data limits the systems’ ability to make accurate recommendations. This problem can be alleviated using cross-domain recommendation techniques. In particular, in a cross-domain setting, knowledge sharing between domains permits improved effectiveness on the target domain. While recent cross-domain recommendation techniques have used a pre-training configuration, we argue that such techniques lead to a low fine-tuning efficiency, especially when using large neural models. In recent language models, prompts have been used for parameter-efficient and time-efficient tuning of the models on downstream tasks - these prompts are tunable latent vectors that allow the rest of the language model’s parameters to be frozen. To address the cross-domain recommendation task in an efficient manner, we propose a novel Personalised Graph Prompt-based Recommendation (PGPRec) framework, which leverages the efficiency benefits of prompt-tuning. In this framework, we develop personalised, item-wise graph prompts based on items relevant to those the user has interacted with. In particular, we apply Contrastive Learning (CL) to generate the pre-trained embeddings, to increase generalisability in the pre-training stage and to ensure an effective prompt-tuning stage. To evaluate the effectiveness of our PGPRec framework in a cross-domain setting, we conduct an extensive evaluation on the top-k recommendation task and perform a cold-start analysis. The obtained empirical results on four Amazon Review datasets show that our proposed PGPRec framework can reduce the number of tuned parameters by up to 74% while maintaining competitive performance, and achieves an 11.41% performance improvement over the strongest baseline in a cold-start scenario.
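To make the parameter-efficiency argument concrete, the sketch below illustrates the general prompt-tuning idea the abstract refers to: a pre-trained backbone (here, item embeddings) is frozen and only a small set of prompt parameters is optimised. This is a minimal, hypothetical PyTorch illustration of the generic technique, not the authors' PGPRec implementation; the class name, the per-user prompt scheme, and the dot-product scoring are assumptions made for the example.

```python
import torch
import torch.nn as nn

class PromptTunedRecommender(nn.Module):
    """Illustrative prompt-tuning setup: the pre-trained item embeddings are
    frozen and only a small per-user prompt vector is learned."""

    def __init__(self, pretrained_item_emb: torch.Tensor, num_users: int, prompt_dim: int):
        super().__init__()
        # Frozen backbone: embeddings obtained from (e.g. contrastive) pre-training.
        self.item_emb = nn.Embedding.from_pretrained(pretrained_item_emb, freeze=True)
        # Tunable part: one prompt vector per user (hypothetical personalisation scheme).
        self.user_prompt = nn.Embedding(num_users, prompt_dim)

    def score(self, user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # Combine the frozen item representation with the learned prompt and
        # score by dot product (a common choice, not necessarily PGPRec's).
        items = self.item_emb(item_ids)
        prompts = self.user_prompt(user_ids)
        return (items * prompts).sum(dim=-1)

# Only the prompt parameters are passed to the optimiser, so the number of
# tuned parameters stays small relative to the full pre-trained model.
pretrained = torch.randn(1000, 64)  # placeholder pre-trained item embeddings
model = PromptTunedRecommender(pretrained, num_users=500, prompt_dim=64)
optimiser = torch.optim.Adam(model.user_prompt.parameters(), lr=1e-3)
```

Freezing the backbone and optimising only the prompt table is what yields the kind of reduction in tuned parameters (up to 74% in the paper's experiments) reported in the abstract.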