| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| OpenChat-3.5-7B-Solar | 41.61 | 71.99 | 46.70 | 43.01 | 50.83 |
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 28.35 | ± | 2.83 |
| | | acc_norm | 28.35 | ± | 2.83 |
| agieval_logiqa_en | 0 | acc | 37.02 | ± | 1.89 |
| | | acc_norm | 39.02 | ± | 1.91 |
| agieval_lsat_ar | 0 | acc | 25.22 | ± | 2.87 |
| | | acc_norm | 23.48 | ± | 2.80 |
| agieval_lsat_lr | 0 | acc | 47.84 | ± | 2.21 |
| | | acc_norm | 43.92 | ± | 2.20 |
| agieval_lsat_rc | 0 | acc | 57.62 | ± | 3.02 |
| | | acc_norm | 56.51 | ± | 3.03 |
| agieval_sat_en | 0 | acc | 75.24 | ± | 3.01 |
| | | acc_norm | 73.30 | ± | 3.09 |
| agieval_sat_en_without_passage | 0 | acc | 39.81 | ± | 3.42 |
| | | acc_norm | 37.38 | ± | 3.38 |
| agieval_sat_math | 0 | acc | 37.27 | ± | 3.27 |
| | | acc_norm | 30.91 | ± | 3.12 |
Average: 41.61%
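For reference, the 41.61% above can be reproduced as the unweighted mean of the per-task acc_norm scores in the table. A minimal sketch, assuming that averaging convention (acc_norm where reported, otherwise acc):

```python
# Sketch: reproduce the AGIEval average as the unweighted mean of the
# per-task acc_norm scores from the table above.
agieval_acc_norm = {
    "agieval_aqua_rat": 28.35,
    "agieval_logiqa_en": 39.02,
    "agieval_lsat_ar": 23.48,
    "agieval_lsat_lr": 43.92,
    "agieval_lsat_rc": 56.51,
    "agieval_sat_en": 73.30,
    "agieval_sat_en_without_passage": 37.38,
    "agieval_sat_math": 30.91,
}

average = sum(agieval_acc_norm.values()) / len(agieval_acc_norm)
print(f"AGIEval average: {average:.2f}%")  # -> 41.61%
```

The same convention (acc_norm where available, otherwise acc; mc2 for TruthfulQA; multiple_choice_grade for Bigbench) also reproduces the per-benchmark averages reported below.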
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 56.23 | ± | 1.45 |
| | | acc_norm | 59.81 | ± | 1.43 |
| arc_easy | 0 | acc | 82.41 | ± | 0.78 |
| | | acc_norm | 81.36 | ± | 0.80 |
| boolq | 1 | acc | 86.79 | ± | 0.59 |
| hellaswag | 0 | acc | 63.17 | ± | 0.48 |
| | | acc_norm | 81.77 | ± | 0.39 |
| openbookqa | 0 | acc | 29.80 | ± | 2.05 |
| | | acc_norm | 40.40 | ± | 2.20 |
| piqa | 0 | acc | 81.66 | ± | 0.90 |
| | | acc_norm | 82.92 | ± | 0.88 |
| winogrande | 0 | acc | 70.88 | ± | 1.28 |
Average: 71.99%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 31.58 | ± | 1.63 |
| | | mc2 | 46.70 | ± | 1.51 |
Average: 46.70%
| Task | Version | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 57.37 | ± | 3.60 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 63.96 | ± | 2.50 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 60.47 | ± | 3.05 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 30.08 | ± | 2.42 |
| | | exact_str_match | 26.74 | ± | 2.34 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 26.20 | ± | 1.97 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 19.86 | ± | 1.51 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 50.00 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 37.40 | ± | 2.17 |
| bigbench_navigate | 0 | multiple_choice_grade | 52.40 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 64.55 | ± | 1.07 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 43.53 | ± | 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 19.94 | ± | 1.27 |
| bigbench_snarks | 0 | multiple_choice_grade | 65.19 | ± | 3.55 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 61.46 | ± | 1.55 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 30.90 | ± | 1.46 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 24.48 | ± | 1.22 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.46 | ± | 0.89 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 50.00 | ± | 2.89 |
Average: 43.01%
Average score: 50.83%
Elapsed time: 02:26:44
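The headline average score is the unweighted mean of the four per-benchmark averages above. A minimal sketch of that final step:

```python
# Sketch: overall score as the unweighted mean of the four
# per-benchmark averages reported in the summary table.
benchmark_averages = {
    "AGIEval": 41.61,
    "GPT4All": 71.99,
    "TruthfulQA": 46.70,
    "Bigbench": 43.01,
}

overall = sum(benchmark_averages.values()) / len(benchmark_averages)
print(f"Average score: {overall:.2f}%")  # -> 50.83%
```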