| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| Llama-3-SLERP-8B | 36.82 | 72.03 | 49.52 | 40.88 | 49.81 |
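The task names and the Task/Version/Metric/Value/Stderr layout below match lm-evaluation-harness output. As a rough sketch only (not the command used to produce these tables: the model path is a placeholder, and task names such as `truthfulqa_mc` and the exact `simple_evaluate` signature vary between harness versions), a run over one of the suites might look like this:

```python
# Illustrative sketch only -- assumes a recent lm-evaluation-harness release.
# Task names and API details differ between harness versions.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=path/to/Llama-3-SLERP-8B",   # placeholder model path
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "openbookqa", "piqa", "winogrande"],          # GPT4All suite tasks
    num_fewshot=0,
    batch_size=8,
)

# Print per-task metrics (acc, acc_norm, stderr, ...) for inspection.
for task, metrics in results["results"].items():
    print(task, metrics)
```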
| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 27.56 | ± 2.81 |
| | | acc_norm | 24.02 | ± 2.69 |
| agieval_logiqa_en | 0 | acc | 34.41 | ± 1.86 |
| | | acc_norm | 37.63 | ± 1.90 |
| agieval_lsat_ar | 0 | acc | 18.26 | ± 2.55 |
| | | acc_norm | 18.26 | ± 2.55 |
| agieval_lsat_lr | 0 | acc | 40.20 | ± 2.17 |
| | | acc_norm | 37.84 | ± 2.15 |
| agieval_lsat_rc | 0 | acc | 53.90 | ± 3.04 |
| | | acc_norm | 49.07 | ± 3.05 |
| agieval_sat_en | 0 | acc | 64.56 | ± 3.34 |
| | | acc_norm | 57.77 | ± 3.45 |
| agieval_sat_en_without_passage | 0 | acc | 38.83 | ± 3.40 |
| | | acc_norm | 35.44 | ± 3.34 |
| agieval_sat_math | 0 | acc | 42.27 | ± 3.34 |
| | | acc_norm | 34.55 | ± 3.21 |
Average (AGIEval): 36.82%
| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| arc_challenge | 0 | acc | 54.69 | ± 1.45 |
| | | acc_norm | 57.94 | ± 1.44 |
| arc_easy | 0 | acc | 83.12 | ± 0.77 |
| | | acc_norm | 81.99 | ± 0.79 |
| boolq | 1 | acc | 84.53 | ± 0.63 |
| hellaswag | 0 | acc | 60.52 | ± 0.49 |
| | | acc_norm | 79.21 | ± 0.40 |
| openbookqa | 0 | acc | 33.40 | ± 2.11 |
| | | acc_norm | 45.20 | ± 2.23 |
| piqa | 0 | acc | 79.82 | ± 0.94 |
| | | acc_norm | 81.39 | ± 0.91 |
| winogrande | 0 | acc | 73.95 | ± 1.23 |
Average (GPT4All): 72.03%
| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 32.68 | ± 1.64 |
| | | mc2 | 49.52 | ± 1.46 |
Average (TruthfulQA): 49.52%
| Task | Version | Metric | Value | Stderr |
|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 56.32 | ± 3.61 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 69.38 | ± 2.40 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 29.84 | ± 2.85 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 31.48 | ± 2.45 |
| | | exact_str_match | 0.00 | ± 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 28.00 | ± 2.01 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 21.00 | ± 1.54 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 50.00 | ± 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 32.60 | ± 2.10 |
| bigbench_navigate | 0 | multiple_choice_grade | 53.90 | ± 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 63.60 | ± 1.08 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 43.08 | ± 2.34 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 26.25 | ± 1.39 |
| bigbench_snarks | 0 | multiple_choice_grade | 50.28 | ± 3.73 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.51 | ± 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 39.90 | ± 1.55 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.56 | ± 1.18 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 17.09 | ± 0.90 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 50.00 | ± 2.89 |
Average (Bigbench): 40.88%
Average score: 49.81%
Elapsed time: 02:44:15
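For reference, the suite and overall averages above are consistent with taking acc_norm where it is reported and acc (or mc2 / multiple_choice_grade) otherwise, averaging within each suite, and then averaging the four suite scores. A minimal check, with the values copied from the tables above:

```python
# Minimal sketch verifying the averaging convention assumed above;
# all numbers are copied directly from the result tables.
agieval = [24.02, 37.63, 18.26, 37.84, 49.07, 57.77, 35.44, 34.55]          # acc_norm
gpt4all = [57.94, 81.99, 84.53, 79.21, 45.20, 81.39, 73.95]                 # acc_norm; acc for boolq/winogrande
truthfulqa = [49.52]                                                        # mc2
bigbench = [56.32, 69.38, 29.84, 31.48, 28.00, 21.00, 50.00, 32.60, 53.90,
            63.60, 43.08, 26.25, 50.28, 50.51, 39.90, 22.56, 17.09, 50.00]  # multiple_choice_grade

def mean(xs):
    return sum(xs) / len(xs)

suite_scores = {name: round(mean(vals), 2) for name, vals in
                [("AGIEval", agieval), ("GPT4All", gpt4all),
                 ("TruthfulQA", truthfulqa), ("Bigbench", bigbench)]}
print(suite_scores)                           # {'AGIEval': 36.82, 'GPT4All': 72.03, 'TruthfulQA': 49.52, 'Bigbench': 40.88}
print(round(mean(suite_scores.values()), 2))  # 49.81
```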