Distributionally Robust Sequential Recommendation

 

1.Abstract

Modeling user sequential behaviors has proven effective in improving recommendation performance. While previous work has achieved remarkable success, it mostly assumes that the training and testing distributions are consistent, which contradicts the dynamic nature of user preferences and can lower recommendation performance. To alleviate this problem, we propose in this paper a robust sequential recommender framework to overcome distribution shift. Specifically, we first simulate different distributions by reweighting the training samples. Then, the maximum loss induced by the various sample distributions is minimized to optimize the 'worst-case' model and improve robustness. Since per-sample weights can be too numerous, introducing excessive flexibility and hindering convergence, we cluster the training samples with both hard and soft strategies and assign each cluster a unified weight. Finally, we analyze our framework by deriving the generalization error bound of the above minimax objective, which helps to understand the framework from a theoretical perspective. We conduct extensive experiments on three real-world datasets to demonstrate the effectiveness of the proposed framework. Empirical results show that it improves Recall and NDCG by about 2.27% and 3.51% on average, respectively.
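The minimax objective described above can be sketched as follows. This is an illustrative reconstruction, not the released code: it uses an exponentiated-gradient ascent step on the cluster weights, a standard way to optimize such worst-case (group-DRO style) objectives; the function and variable names are ours.

```python
import numpy as np

def group_dro_step(cluster_losses, weights, lr_w=0.1):
    """One ascent step on the cluster weights: up-weight high-loss clusters
    (exponentiated-gradient update, then renormalize onto the simplex).
    The model parameters would then be updated to minimize the
    weighted loss sum(weights * cluster_losses)."""
    w = weights * np.exp(lr_w * cluster_losses)
    return w / w.sum()

# Toy example: three clusters, the second incurs the highest loss.
losses = np.array([0.2, 1.5, 0.4])
w = np.full(3, 1.0 / 3)
for _ in range(50):
    w = group_dro_step(losses, w)
# After repeated ascent steps, the worst-case cluster dominates the
# weight vector, so the model focuses on the hardest distribution.
```

In the full framework, `cluster_losses` would be recomputed after every model update, so the weights and the model play the minimax game jointly.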

 

 

2.Motivating Examples

Figure 1: (a) Example of the user preference shift from digital products to sports items.

(b) and (c) Examples of user preference changes in item brands and colors.

 

 

3.Contributions

In summary, the main contributions of this paper are as follows:

 

4.Code and Datasets

 

4.1 Code files (link: One Drive)

datasets: Dataset folder
models: Base models and our framework code
utility: Utility scripts
run_RSR.py: Quick-start script
main.py: Training script

 

4.2 Datasets overview (link: One Drive)

Dataset   #Users   #Items   #Interactions   Avg. SL   Sparsity
Sports    35,598   18,357   286,207         8.02      99.95%
Toys      19,412   11,924   156,072         8.04      99.93%
Yelp      30,431   20,033   282,399         9.28      99.95%

(Avg. SL denotes the average sequence length per user.)
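For reference, the sparsity and average sequence length can be recomputed from the raw counts; a minimal sketch is below (the reported values may differ slightly due to preprocessing and rounding):

```python
def dataset_stats(n_users, n_items, n_interactions):
    """Sparsity = fraction of user-item pairs with no interaction;
    average sequence length = interactions per user."""
    sparsity = 1 - n_interactions / (n_users * n_items)
    avg_seq_len = n_interactions / n_users
    return sparsity, avg_seq_len

# Sports: 35,598 users, 18,357 items, 286,207 interactions
sparsity, avg_len = dataset_stats(35_598, 18_357, 286_207)
# sparsity is about 99.95-99.96%, avg_len is about 8
```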

 

 

5.Usage

  1. Download the code.
  2. Download the dataset and put it into the datasets folder.
  3. Run the run_RSR.py file.
  4. For example, use the soft-clustering framework with GRU4Rec as the base model on Sports with a 10% noise ratio.
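A hypothetical invocation of the example above might look like the following. The flag names here are placeholders of our own; the actual argument names are defined in run_RSR.py, so check its argument-parsing section before running.

```shell
# Hypothetical flags -- see run_RSR.py for the real argument names
python run_RSR.py --basemodel GRU4Rec --dataset Sports --cluster soft --noise_ratio 0.1
```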

 

 

6.Detailed parameter search ranges

We tune hyper-parameters according to the following table.

Hyper-parameter   Explanation                             Range
lr_base           Learning rate of the base model         {0.1, 0.01, 0.001, 0.0001, 0.00001}
lr_w              Learning rate of the weight vector      {0.1, 0.01, 0.001, 0.0001, 0.00001}
lr_f              Learning rate of the cluster component  {0.1, 0.01, 0.001, 0.0001, 0.00001}
BatchSize         Batch size                              {32, 64, 128, 256, 512, 1024}
K                 Number of clusters                      {4, 8, 16, 32, 64, 128}
temperature       Temperature coefficient in the softmax  {0.01, 0.1, 0.2, 0.4, 0.6, 0.8, 1, 2, 100}
lambda            Scale weight of the clustering loss     {0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05}

 

 

7.Runtime Environment