Experiment

Experiment

class cornac.experiment.experiment.Experiment(eval_method, models, metrics, user_based=True, verbose=False)[source]

Experiment Class

Parameters:
  • eval_method (<cornac.eval_methods.BaseMethod>, required) – The evaluation method (e.g., RatioSplit).
  • models (array of <cornac.models.Recommender>, required) – A collection of recommender models to evaluate, e.g., [C2PF, HPF, PMF].
  • metrics (array of <cornac.metrics.RatingMetric> or <cornac.metrics.RankingMetric>, required) – A collection of metrics used to evaluate the recommender models, e.g., [NDCG, MRR, Recall].
  • user_based (bool, optional, default: True) – This parameter only applies to rating metrics. When True, performance is first averaged per user, and those per-user averages are then averaged to produce the final result. When False, results are averaged over the number of ratings.
  • verbose (bool, optional, default: False) – When True, running logs are printed during evaluation.
  • result (array of <cornac.experiment.result.Result>, default: None) – This attribute holds the per-model results of the experiment; it remains None until the experiment has been run.
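A minimal usage sketch, assuming the built-in MovieLens 100K loader and illustrative hyperparameters (adapt the models, metrics, and split settings to your own data):

    import cornac
    from cornac.eval_methods import RatioSplit
    from cornac.models import PMF
    from cornac.metrics import MAE, RMSE, Recall, NDCG

    # Load user-item-rating triples (MovieLens 100K, downloaded on first use).
    data = cornac.datasets.movielens.load_feedback()

    # Evaluation method: an 80/20 random train/test split.
    ratio_split = RatioSplit(data=data, test_size=0.2, rating_threshold=4.0, seed=123)

    # Models and metrics to compare.
    models = [PMF(k=10, max_iter=100, learning_rate=0.001, lambda_reg=0.001, seed=123)]
    metrics = [MAE(), RMSE(), Recall(k=20), NDCG(k=20)]

    # Put everything together and run the experiment.
    cornac.Experiment(eval_method=ratio_split, models=models, metrics=metrics, user_based=True).run()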

Result

class cornac.experiment.result.Result(model_name, metric_avg_results, metric_user_results)[source]

Result Class for a single model

Parameters:
  • model_name (string, required) – The name of the recommender model.
  • metric_avg_results (OrderedDict, required) – An ordered dictionary containing the average result for each metric.
  • metric_user_results (defaultdict, required) – A dictionary containing the per-user results for each metric.
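Once an experiment has been run, its result attribute holds one Result object per model, and the fields above can be inspected directly. A short sketch, continuing from the example in the Experiment section:

    # Keep a reference to the Experiment so the per-model Result objects
    # can be inspected after run().
    exp = cornac.Experiment(eval_method=ratio_split, models=models, metrics=metrics)
    exp.run()

    for res in exp.result:                      # one Result per evaluated model
        print(res.model_name)                   # e.g. "PMF"
        for metric, score in res.metric_avg_results.items():
            print(f"  {metric}: {score:.4f}")   # average score for that metric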