abl - INFO - loop(train) [1/3] segment(train) [1/10]
abl - INFO - model loss: 0.00024
abl - INFO - loop(train) [1/3] segment(train) [2/10]
abl - INFO - model loss: 0.00053
abl - INFO - model loss: 0.00011
abl - INFO - loop(train) [1/3] segment(train) [3/10]
abl - INFO - model loss: 0.00260
abl - INFO - model loss: 0.00332
abl - INFO - loop(train) [1/3] segment(train) [4/10]
abl - INFO - model loss: 0.00162
abl - INFO - model loss: 0.00218
abl - INFO - loop(train) [1/3] segment(train) [5/10]
abl - INFO - model loss: 0.00073
abl - INFO - model loss: 0.00162
abl - INFO - loop(train) [1/3] segment(train) [6/10]
abl - INFO - model loss: 0.00055
abl - INFO - model loss: 0.00140
abl - INFO - loop(train) [1/3] segment(train) [7/10]
abl - INFO - model loss: 0.00148
abl - INFO - model loss: 0.00736
abl - INFO - loop(train) [1/3] segment(train) [8/10]
abl - INFO - model loss: 0.00034
abl - INFO - model loss: 0.00532
abl - INFO - loop(train) [1/3] segment(train) [9/10]
abl - INFO - model loss: 0.00167
abl - INFO - model loss: 0.00504
abl - INFO - loop(train) [1/3] segment(train) [10/10]
abl - INFO - model loss: 0.00185
abl - INFO - Evaluation start: loop(val) [1]
abl - INFO - Evaluation ended, hwf/character_accuracy: 1.000 hwf/reasoning_accuracy: 0.999
abl - INFO - Saving model: loop(save) [1]
abl - INFO - Checkpoints will be saved to weights_dir/model_checkpoint_loop_1.pth
abl - INFO - model loss: 0.00259
abl - INFO - Evaluation start: loop(val) [1]
abl - INFO - Evaluation ended, hwf/character_accuracy: 0.997 hwf/reasoning_accuracy: 0.985
abl - INFO - loop(train) [2/3] segment(train) [1/10]
abl - INFO - model loss: 0.00219
abl - INFO - loop(train) [2/3] segment(train) [2/10]
abl - INFO - model loss: 0.00069
abl - INFO - loop(train) [2/3] segment(train) [3/10]
abl - INFO - model loss: 0.00013
abl - INFO - loop(train) [2/3] segment(train) [4/10]
abl - INFO - model loss: 0.00013
abl - INFO - loop(train) [2/3] segment(train) [5/10]
abl - INFO - model loss: 0.00248
abl - INFO - loop(train) [2/3] segment(train) [6/10]
abl - INFO - model loss: 0.00010
abl - INFO - loop(train) [2/3] segment(train) [7/10]
abl - INFO - model loss: 0.00020
abl - INFO - loop(train) [2/3] segment(train) [8/10]
abl - INFO - model loss: 0.00076
abl - INFO - loop(train) [2/3] segment(train) [9/10]
abl - INFO - model loss: 0.00061
abl - INFO - loop(train) [2/3] segment(train) [10/10]
abl - INFO - model loss: 0.00117
abl - INFO - Evaluation start: loop(val) [2]
abl - INFO - Evaluation ended, hwf/character_accuracy: 1.000 hwf/reasoning_accuracy: 1.000
abl - INFO - Saving model: loop(save) [2]
abl - INFO - Checkpoints will be saved to weights_dir/model_checkpoint_loop_2.pth
abl - INFO - model loss: 0.00126
...
abl - INFO - Evaluation start: loop(val) [2]
abl - INFO - Evaluation ended, hwf/character_accuracy: 0.998 hwf/reasoning_accuracy: 0.989
abl - INFO - loop(train) [3/3] segment(train) [1/10]
abl - INFO - model loss: 0.00120
abl - INFO - loop(train) [3/3] segment(train) [2/10]
abl - INFO - model loss: 0.00114
abl - INFO - loop(train) [3/3] segment(train) [3/10]
abl - INFO - model loss: 0.00071
abl - INFO - loop(train) [3/3] segment(train) [4/10]
abl - INFO - model loss: 0.00027
abl - INFO - loop(train) [3/3] segment(train) [5/10]
abl - INFO - model loss: 0.00017
abl - INFO - loop(train) [3/3] segment(train) [6/10]
abl - INFO - model loss: 0.00018
abl - INFO - loop(train) [3/3] segment(train) [7/10]
abl - INFO - model loss: 0.00141
abl - INFO - loop(train) [3/3] segment(train) [8/10]
abl - INFO - model loss: 0.00099
abl - INFO - loop(train) [3/3] segment(train) [9/10]
abl - INFO - model loss: 0.00145
abl - INFO - loop(train) [3/3] segment(train) [10/10]
abl - INFO - model loss: 0.00215
abl - INFO - Evaluation start: loop(val) [3]
abl - INFO - Evaluation ended, hwf/character_accuracy: 1.000 hwf/reasoning_accuracy: 1.000
abl - INFO - Saving model: loop(save) [3]
abl - INFO - Checkpoints will be saved to weights_dir/model_checkpoint_loop_3.pth
abl - INFO - model loss: 0.00030
...
abl - INFO - Evaluation start: loop(val) [3]
abl - INFO - Evaluation ended, hwf/character_accuracy: 0.999 hwf/reasoning_accuracy: 0.996
abl - INFO - Test start:
abl - INFO - Evaluation ended, hwf/character_accuracy: 0.997 hwf/reasoning_accuracy: 0.986
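The log above reflects a fixed training schedule: three outer loops, each running ten training segments followed by one validation pass and one checkpoint save. A minimal sketch of that schedule (the function and variable names here are illustrative, not the actual ABL API):

```python
def training_schedule(loops: int = 3, segments: int = 10) -> list:
    """Return the sequence of log-style events this schedule produces."""
    events = []
    for loop in range(1, loops + 1):
        # Each loop trains over `segments` consecutive segments...
        for segment in range(1, segments + 1):
            events.append(f"loop(train) [{loop}/{loops}] segment(train) [{segment}/{segments}]")
        # ...then validates once and saves one checkpoint.
        events.append(f"Evaluation start: loop(val) [{loop}]")
        events.append(f"Saving model: weights_dir/model_checkpoint_loop_{loop}.pth")
    return events

events = training_schedule()
print(len(events))  # 3 loops x (10 segments + eval + save) = 36 events
```

Each loop thus produces one checkpoint file, named after the loop index, which matches the `model_checkpoint_loop_N.pth` paths in the log.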
Performance
-----------
We present the results of ABL below, including the reasoning accuracy (for different equation lengths in the HWF dataset) and the training time (required to achieve that accuracy using all equation lengths). These results are compared with the following methods:
- `DeepProbLog <https://github.com/ML-KULeuven/deepproblog/tree/master>`_: An extension of ProbLog that introduces neural predicates into probabilistic logic programming;
- `DeepStochLog <https://github.com/ML-KULeuven/deepstochlog/tree/main>`_: A neural-symbolic framework based on stochastic logic programs;
- `NGS <https://github.com/liqing-ustc/NGS>`_: A neural-symbolic framework that uses a grammar model and a back-search algorithm to improve its computation process.
Note: Each symbol in the HWF dataset is either a digit or one of the operators '+', '-', '×', '÷'. The formulas have lengths 1, 3, 5, or 7 (specifically, 10% of the equations have length 1, 10% have length 3, 20% have length 5, and 60% have length 7).
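Given the stated distribution, one can sanity-check that the proportions sum to one and compute the average formula length. A small sketch (the distribution values are taken from the note above; variable names are illustrative):

```python
# Length distribution of HWF formulas, as stated in the note above:
# 10% length 1, 10% length 3, 20% length 5, 60% length 7.
length_dist = {1: 0.10, 3: 0.10, 5: 0.20, 7: 0.60}

# The proportions should sum to one.
assert abs(sum(length_dist.values()) - 1.0) < 1e-9

# Expected (average) formula length under this distribution.
expected_len = sum(length * p for length, p in length_dist.items())
print(round(expected_len, 2))  # 5.6
```

Most of the dataset therefore consists of the longest formulas, which is why accuracy on length-7 equations dominates the overall reasoning accuracy.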