| Model | AGIEval | GPT4All | TruthfulQA | Bigbench | Average |
|---|---|---|---|---|---|
| Llama-3-Smaug-8B | 37.15 | 69.12 | 51.66 | 40.67 | 49.65 |
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| agieval_aqua_rat | 0 | acc | 24.80 | ± | 2.72 |
| | | acc_norm | 22.83 | ± | 2.64 |
| agieval_logiqa_en | 0 | acc | 34.87 | ± | 1.87 |
| | | acc_norm | 34.72 | ± | 1.87 |
| agieval_lsat_ar | 0 | acc | 23.48 | ± | 2.80 |
| | | acc_norm | 21.30 | ± | 2.71 |
| agieval_lsat_lr | 0 | acc | 40.00 | ± | 2.17 |
| | | acc_norm | 39.02 | ± | 2.16 |
| agieval_lsat_rc | 0 | acc | 52.42 | ± | 3.05 |
| | | acc_norm | 46.47 | ± | 3.05 |
| agieval_sat_en | 0 | acc | 63.59 | ± | 3.36 |
| | | acc_norm | 60.68 | ± | 3.41 |
| agieval_sat_en_without_passage | 0 | acc | 38.83 | ± | 3.40 |
| | | acc_norm | 34.47 | ± | 3.32 |
| agieval_sat_math | 0 | acc | 41.36 | ± | 3.33 |
| | | acc_norm | 37.73 | ± | 3.28 |
Average: 37.15%
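For reference, this suite average appears to be the unweighted mean of the acc_norm values in the table above. A minimal sketch of that check in Python, with the scores copied from the table (the averaging rule itself is an assumption, not stated by the eval output):

```python
# Sketch only: assumes the AGIEval average is the plain mean of the
# acc_norm values listed above. Scores are copied from the table.
agieval_acc_norm = [22.83, 34.72, 21.30, 39.02, 46.47, 60.68, 34.47, 37.73]

average = sum(agieval_acc_norm) / len(agieval_acc_norm)
print(f"{average:.2f}")  # 37.15, matching the reported average
```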
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| arc_challenge | 0 | acc | 51.28 | ± | 1.46 |
| | | acc_norm | 53.84 | ± | 1.46 |
| arc_easy | 0 | acc | 80.43 | ± | 0.81 |
| | | acc_norm | 76.14 | ± | 0.87 |
| boolq | 1 | acc | 80.95 | ± | 0.69 |
| hellaswag | 0 | acc | 58.39 | ± | 0.49 |
| | | acc_norm | 77.56 | ± | 0.42 |
| openbookqa | 0 | acc | 34.40 | ± | 2.13 |
| | | acc_norm | 43.60 | ± | 2.22 |
| piqa | 0 | acc | 79.11 | ± | 0.95 |
| | | acc_norm | 78.56 | ± | 0.96 |
| winogrande | 0 | acc | 73.16 | ± | 1.25 |
Average: 69.12%
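The GPT4All average is likewise consistent with an unweighted mean, using acc_norm where it is reported and plain acc for boolq and winogrande, which report only acc. A small check under that assumption:

```python
# Sketch only: assumes acc_norm is used where available, acc otherwise.
gpt4all_scores = {
    "arc_challenge": 53.84,  # acc_norm
    "arc_easy": 76.14,       # acc_norm
    "boolq": 80.95,          # acc (no acc_norm reported)
    "hellaswag": 77.56,      # acc_norm
    "openbookqa": 43.60,     # acc_norm
    "piqa": 78.56,           # acc_norm
    "winogrande": 73.16,     # acc (no acc_norm reported)
}

average = sum(gpt4all_scores.values()) / len(gpt4all_scores)
print(f"{average:.2f}")  # 69.12, matching the reported average
```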
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| truthfulqa_mc | 1 | mc1 | 35.74 | ± | 1.68 |
| | | mc2 | 51.66 | ± | 1.52 |
Average: 51.66%
| Task | Version | Metric | Value | | Stderr |
|---|---|---|---|---|---|
| bigbench_causal_judgement | 0 | multiple_choice_grade | 52.11 | ± | 3.63 |
| bigbench_date_understanding | 0 | multiple_choice_grade | 67.75 | ± | 2.44 |
| bigbench_disambiguation_qa | 0 | multiple_choice_grade | 31.78 | ± | 2.90 |
| bigbench_geometric_shapes | 0 | multiple_choice_grade | 13.65 | ± | 1.81 |
| | | exact_str_match | 0.00 | ± | 0.00 |
| bigbench_logical_deduction_five_objects | 0 | multiple_choice_grade | 29.80 | ± | 2.05 |
| bigbench_logical_deduction_seven_objects | 0 | multiple_choice_grade | 20.14 | ± | 1.52 |
| bigbench_logical_deduction_three_objects | 0 | multiple_choice_grade | 51.00 | ± | 2.89 |
| bigbench_movie_recommendation | 0 | multiple_choice_grade | 33.00 | ± | 2.10 |
| bigbench_navigate | 0 | multiple_choice_grade | 51.70 | ± | 1.58 |
| bigbench_reasoning_about_colored_objects | 0 | multiple_choice_grade | 58.40 | ± | 1.10 |
| bigbench_ruin_names | 0 | multiple_choice_grade | 45.09 | ± | 2.35 |
| bigbench_salient_translation_error_detection | 0 | multiple_choice_grade | 36.47 | ± | 1.52 |
| bigbench_snarks | 0 | multiple_choice_grade | 51.38 | ± | 3.73 |
| bigbench_sports_understanding | 0 | multiple_choice_grade | 50.30 | ± | 1.59 |
| bigbench_temporal_sequences | 0 | multiple_choice_grade | 49.10 | ± | 1.58 |
| bigbench_tracking_shuffled_objects_five_objects | 0 | multiple_choice_grade | 22.88 | ± | 1.19 |
| bigbench_tracking_shuffled_objects_seven_objects | 0 | multiple_choice_grade | 16.46 | ± | 0.89 |
| bigbench_tracking_shuffled_objects_three_objects | 0 | multiple_choice_grade | 51.00 | ± | 2.89 |
Average: 40.67%
Average score: 49.65%
Elapsed time: 08:07:52
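The overall score above appears to be the unweighted mean of the four suite averages; a quick check under that assumption:

```python
# Sketch only: assumes the final score is the plain mean of the four
# per-suite averages reported above.
suite_averages = {
    "AGIEval": 37.15,
    "GPT4All": 69.12,
    "TruthfulQA": 51.66,
    "Bigbench": 40.67,
}

overall = sum(suite_averages.values()) / len(suite_averages)
print(f"{overall:.2f}")  # 49.65, matching the reported overall score
```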