
Merge pull request #8 from AbductiveLearning/Dev

Add details on experiment performance
Branch: main
Wen-Chao Hu, 1 year ago
Commit: 0f4417f65e
8 changed files with 166 additions and 19 deletions
  1. README.md (+1, -1)
  2. docs/Examples/HWF.rst (+40, -1)
  3. docs/Examples/MNISTAdd.rst (+47, -1)
  4. docs/index.rst (+1, -1)
  5. examples/hwf/README.md (+29, -1)
  6. examples/hwf/hwf.ipynb (+6, -13)
  7. examples/mnist_add/README.md (+36, -1)
  8. examples/mnist_add/mnist_add.ipynb (+6, -0)

README.md (+1, -1)

@@ -19,7 +19,7 @@
Key Features of ABLkit:

- **High Flexibility**: Compatible with various machine learning modules and logical reasoning components.
- - **User-Friendly Interface**: Provide data, model, and knowledge, and get started with just a few lines of code.
+ - **Easy-to-Use Interface**: Provide data, model, and knowledge, and get started with just a few lines of code.
- **Optimized Performance**: Optimization for high performance and accelerated training speed.

ABLkit encapsulates advanced ABL techniques, providing users with an efficient and convenient toolkit to develop dual-driven ABL systems, which leverage the power of both data and knowledge.


docs/Examples/HWF.rst (+40, -1)

@@ -422,10 +422,44 @@ Log:
abl - INFO - Test start:
abl - INFO - Evaluation ended, hwf/character_accuracy: 0.997 hwf/reasoning_accuracy: 0.986

Environment
-----------

For all experiments, we used a single Linux server. Details on the specifications are listed in the table below.

.. raw:: html

<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;margin-bottom:20px;}
.tg td, .tg th {border:1px solid #ddd;padding:8px 22px;text-align:center;}
.tg th {background-color:#f5f5f5;color:#333333;}
.tg tr:nth-child(even) {background-color:#f9f9f9;}
.tg tr:nth-child(odd) {background-color:#ffffff;}
</style>

<table class="tg" style="margin-left: auto; margin-right: auto;">
<thead>
<tr>
<th>CPU</th>
<th>GPU</th>
<th>Memory</th>
<th>OS</th>
</tr>
</thead>
<tbody>
<tr>
<td>2 * Xeon Platinum 8358, 32 Cores, 2.6 GHz Base Frequency</td>
<td>A100 80GB</td>
<td>512GB</td>
<td>Ubuntu 20.04</td>
</tr>
</tbody>
</table>

Performance
-----------

- We present the results of ABL as follows, which include the reasoning accuracy (for different equation lengths in the HWF dataset), and the training time (to achieve the accuracy using all equation lengths). These results are compared with the following methods:
+ We present the results of ABL as follows, which include the reasoning accuracy (for different equation lengths in the HWF dataset), training time (to achieve the accuracy using all equation lengths), and average memory usage (using all equation lengths). These results are compared with the following methods:

- `NGS <https://github.com/liqing-ustc/NGS>`_: A neural-symbolic framework that uses a grammar model and a back-search algorithm to improve its computing process;

@@ -448,6 +482,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<th rowspan="2"></th>
<th colspan="5">Reasoning Accuracy<br><span style="font-weight: normal; font-size: smaller;">(for different equation lengths)</span></th>
<th rowspan="2">Training Time (s)<br><span style="font-weight: normal; font-size: smaller;">(to achieve the Acc. using all lengths)</span></th>
<th rowspan="2">Average Memory Usage (MB)<br><span style="font-weight: normal; font-size: smaller;">(using all lengths)</span></th>
</tr>
<tr>
<th>1</th>
@@ -466,6 +501,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>5.2</td>
<td>98.4</td>
<td>426.2</td>
<td>3705</td>
</tr>
<tr>
<td>DeepProbLog</td>
@@ -475,6 +511,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>timeout</td>
<td>timeout</td>
<td>timeout</td>
<td>4315</td>
</tr>
<tr>
<td>DeepStochLog</td>
@@ -484,6 +521,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>timeout</td>
<td>timeout</td>
<td>timeout</td>
<td>4355</td>
</tr>
<tr>
<td>ABL</td>
@@ -493,6 +531,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td><span style="font-weight:bold">97.2</span></td>
<td><span style="font-weight:bold">98.6</span></td>
<td><span style="font-weight:bold">77.3</span></td>
<td><span style="font-weight:bold">3074</span></td>
</tr>
</tbody>
</table>

docs/Examples/MNISTAdd.rst (+47, -1)

@@ -371,6 +371,40 @@ Log:
abl - INFO - Evaluation ended, mnist_add/character_accuracy: 0.991 mnist_add/reasoning_accuracy: 0.980


Environment
-----------

For all experiments, we used a single Linux server. Details on the specifications are listed in the table below.

.. raw:: html

<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;margin-bottom:20px;}
.tg td, .tg th {border:1px solid #ddd;padding:8px 22px;text-align:center;}
.tg th {background-color:#f5f5f5;color:#333333;}
.tg tr:nth-child(even) {background-color:#f9f9f9;}
.tg tr:nth-child(odd) {background-color:#ffffff;}
</style>

<table class="tg" style="margin-left: auto; margin-right: auto;">
<thead>
<tr>
<th>CPU</th>
<th>GPU</th>
<th>Memory</th>
<th>OS</th>
</tr>
</thead>
<tbody>
<tr>
<td>2 * Xeon Platinum 8358, 32 Cores, 2.6 GHz Base Frequency</td>
<td>A100 80GB</td>
<td>512GB</td>
<td>Ubuntu 20.04</td>
</tr>
</tbody>
</table>


Performance
-----------
@@ -379,6 +413,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (

- `NeurASP <https://github.com/azreasoners/NeurASP>`_: An extension of answer set programs by treating the neural network output as the probability distribution over atomic facts;
- `DeepProbLog <https://github.com/ML-KULeuven/deepproblog>`_: An extension of ProbLog by introducing neural predicates in Probabilistic Logic Programming;
- `LTN <https://github.com/logictensornetworks/logictensornetworks>`_: A neural-symbolic framework that uses differentiable first-order logic language to incorporate data and logic;
- `DeepStochLog <https://github.com/ML-KULeuven/deepstochlog>`_: A neural-symbolic framework based on stochastic logic programs.

.. raw:: html
@@ -397,6 +432,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<th>Method</th>
<th>Accuracy</th>
<th>Time to achieve the Acc. (s)</th>
<th>Average Memory Usage (MB)</th>
</tr>
</thead>
<tbody>
@@ -404,21 +440,31 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>NeurASP</td>
<td>96.2</td>
<td>966</td>
<td>3552</td>
</tr>
<tr>
<td>DeepProbLog</td>
<td>97.1</td>
<td>2045</td>
<td>3521</td>
</tr>
<tr>
<td>LTN</td>
<td>97.4</td>
<td>251</td>
<td>3860</td>
</tr>
<tr>
<td>DeepStochLog</td>
<td>97.5</td>
<td>257</td>
<td>3545</td>
</tr>
<tr>
<td>ABL</td>
<td><span style="font-weight:bold">98.1</span></td>
<td><span style="font-weight:bold">47</span></td>
<td><span style="font-weight:bold">2482</span></td>
</tr>
</tbody>
</table>

docs/index.rst (+1, -1)

@@ -17,7 +17,7 @@ where both data and (logical) domain knowledge are available.
Key Features of ABLkit:

- **High Flexibility**: Compatible with various machine learning modules and logical reasoning components.
- - **User-Friendly Interface**: Provide **data**, :blue-bold:`model`, and :green-bold:`knowledge`, and get started with just a few lines of code.
+ - **Easy-to-Use Interface**: Provide **data**, :blue-bold:`model`, and :green-bold:`knowledge`, and get started with just a few lines of code.
- **Optimized Performance**: Optimization for high performance and accelerated training speed.

ABLkit encapsulates advanced ABL techniques, providing users with


examples/hwf/README.md (+29, -1)

@@ -46,9 +46,32 @@ optional arguments:

```

## Environment

For all experiments, we used a single Linux server. Details on the specifications are listed in the table below.

<table class="tg" style="margin-left: auto; margin-right: auto;">
<thead>
<tr>
<th>CPU</th>
<th>GPU</th>
<th>Memory</th>
<th>OS</th>
</tr>
</thead>
<tbody>
<tr>
<td>2 * Xeon Platinum 8358, 32 Cores, 2.6 GHz Base Frequency</td>
<td>A100 80GB</td>
<td>512GB</td>
<td>Ubuntu 20.04</td>
</tr>
</tbody>
</table>

## Performance

- We present the results of ABL as follows, which include the reasoning accuracy (for different equation lengths in the HWF dataset), and the training time (to achieve the accuracy using all equation lengths). These results are compared with the following methods:
+ We present the results of ABL as follows, which include the reasoning accuracy (for different equation lengths in the HWF dataset), training time (to achieve the accuracy using all equation lengths), and average memory usage (using all equation lengths). These results are compared with the following methods:

- [**NGS**](https://github.com/liqing-ustc/NGS): A neural-symbolic framework that uses a grammar model and a back-search algorithm to improve its computing process;
- [**DeepProbLog**](https://github.com/ML-KULeuven/deepproblog/tree/master): An extension of ProbLog by introducing neural predicates in Probabilistic Logic Programming;
@@ -60,6 +83,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<th rowspan="2"></th>
<th colspan="5">Reasoning Accuracy<br><span style="font-weight: normal; font-size: smaller;">(for different equation lengths)</span></th>
<th rowspan="2">Training Time (s)<br><span style="font-weight: normal; font-size: smaller;">(to achieve the Acc. using all lengths)</span></th>
<th rowspan="2">Average Memory Usage (MB)<br><span style="font-weight: normal; font-size: smaller;">(using all lengths)</span></th>
</tr>
<tr>
<th>1</th>
@@ -78,6 +102,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>5.2</td>
<td>98.4</td>
<td>426.2</td>
<td>3705</td>
</tr>
<tr>
<td>DeepProbLog</td>
@@ -87,6 +112,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>timeout</td>
<td>timeout</td>
<td>timeout</td>
<td>4315</td>
</tr>
<tr>
<td>DeepStochLog</td>
@@ -96,6 +122,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>timeout</td>
<td>timeout</td>
<td>timeout</td>
<td>4355</td>
</tr>
<tr>
<td>ABL</td>
@@ -105,6 +132,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td><span style="font-weight:bold">97.2</span></td>
<td><span style="font-weight:bold">99.2</span></td>
<td><span style="font-weight:bold">77.3</span></td>
<td><span style="font-weight:bold">3074</span></td>
</tr>
</tbody>
</table>


examples/hwf/hwf.ipynb (+6, -13)

@@ -434,7 +434,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We present the results of ABL as follows, which include the reasoning accuracy (for different equation lengths in the HWF dataset), and the training time (to achieve the accuracy using all equation lengths). These results are compared with the following methods:\n",
+ "We present the results of ABL as follows, which include the reasoning accuracy (for different equation lengths in the HWF dataset), training time (to achieve the accuracy using all equation lengths), and average memory usage (using all equation lengths). These results are compared with the following methods:\n",
"\n",
"- [**NGS**](https://github.com/liqing-ustc/NGS): A neural-symbolic framework that uses a grammar model and a back-search algorithm to improve its computing process;\n",
"\n",
@@ -447,19 +447,13 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"<style type=\"text/css\">\n",
".tg {border-collapse:collapse;border-spacing:0;margin-bottom:20px;}\n",
".tg td, .tg th {border:1px solid #ddd;padding:10px 15px;text-align:center;}\n",
".tg th {background-color:#f5f5f5;color:#333333;}\n",
".tg tr:nth-child(even) {background-color:#f9f9f9;}\n",
".tg tr:nth-child(odd) {background-color:#ffffff;}\n",
"</style>\n",
"<table class=\"tg\" style=\"margin-left: auto; margin-right: auto;\">\n",
"<thead>\n",
" <tr>\n",
" <th rowspan=\"2\"></th>\n",
" <th colspan=\"5\">Reasoning Accuracy<br><span style=\"font-weight: normal; font-size: smaller;\">(for different equation lengths)</span></th>\n",
" <th rowspan=\"2\">Training Time (s)<br><span style=\"font-weight: normal; font-size: smaller;\">(to achieve the Acc. using all lengths)</span></th>\n",
" <th rowspan=\"2\">Average Memory Usage (MB)<br><span style=\"font-weight: normal; font-size: smaller;\">(using all lengths)</span></th>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
@@ -478,6 +472,7 @@
" <td>5.2</td>\n",
" <td>98.4</td>\n",
" <td>426.2</td>\n",
" <td>3705</td>\n",
" </tr>\n",
" <tr>\n",
" <td>DeepProbLog</td>\n",
@@ -487,6 +482,7 @@
" <td>timeout</td>\n",
" <td>timeout</td>\n",
" <td>timeout</td>\n",
" <td>4315</td>\n",
" </tr>\n",
" <tr>\n",
" <td>DeepStochLog</td>\n",
@@ -496,6 +492,7 @@
" <td>timeout</td>\n",
" <td>timeout</td>\n",
" <td>timeout</td>\n",
" <td>4355</td>\n",
" </tr>\n",
" <tr>\n",
" <td>ABL</td>\n",
@@ -505,16 +502,12 @@
" <td><span style=\"font-weight:bold\">97.2</span></td>\n",
" <td><span style=\"font-weight:bold\">99.2</span></td>\n",
" <td><span style=\"font-weight:bold\">77.3</span></td>\n",
" <td><span style=\"font-weight:bold\">3074</span></td>\n",
" </tr>\n",
"</tbody>\n",
"</table>\n",
"<p style=\"font-size: 13px;\">* timeout: needs more than 1 hour to execute</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": []
}
],
"metadata": {


examples/mnist_add/README.md (+36, -1)

@@ -47,6 +47,29 @@ optional arguments:

```

## Environment

For all experiments, we used a single Linux server. Details on the specifications are listed in the table below.

<table class="tg" style="margin-left: auto; margin-right: auto;">
<thead>
<tr>
<th>CPU</th>
<th>GPU</th>
<th>Memory</th>
<th>OS</th>
</tr>
</thead>
<tbody>
<tr>
<td>2 * Xeon Platinum 8358, 32 Cores, 2.6 GHz Base Frequency</td>
<td>A100 80GB</td>
<td>512GB</td>
<td>Ubuntu 20.04</td>
</tr>
</tbody>
</table>


## Performance

@@ -54,6 +77,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (

- [**NeurASP**](https://github.com/azreasoners/NeurASP): An extension of answer set programs by treating the neural network output as the probability distribution over atomic facts;
- [**DeepProbLog**](https://github.com/ML-KULeuven/deepproblog): An extension of ProbLog by introducing neural predicates in Probabilistic Logic Programming;
- [**LTN**](https://github.com/logictensornetworks/logictensornetworks): A neural-symbolic framework that uses differentiable first-order logic language to incorporate data and logic;
- [**DeepStochLog**](https://github.com/ML-KULeuven/deepstochlog): A neural-symbolic framework based on stochastic logic programs.

<table class="tg" style="margin-left: auto; margin-right: auto;">
@@ -62,6 +86,7 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<th>Method</th>
<th>Accuracy</th>
<th>Time to achieve the Acc. (s)</th>
<th>Average Memory Usage (MB)</th>
</tr>
</thead>
<tbody>
@@ -69,21 +94,31 @@ We present the results of ABL as follows, which include the reasoning accuracy (
<td>NeurASP</td>
<td>96.2</td>
<td>966</td>
<td>3552</td>
</tr>
<tr>
<td>DeepProbLog</td>
<td>97.1</td>
<td>2045</td>
<td>3521</td>
</tr>
<tr>
<td>LTN</td>
<td>97.4</td>
<td>251</td>
<td>3860</td>
</tr>
<tr>
<td>DeepStochLog</td>
<td>97.5</td>
<td>257</td>
<td>3545</td>
</tr>
<tr>
<td>ABL</td>
<td><span style="font-weight:bold">98.1</span></td>
<td><span style="font-weight:bold">47</span></td>
<td><span style="font-weight:bold">2482</span></td>
</tr>
</tbody>
</table>

examples/mnist_add/mnist_add.ipynb (+6, -0)

@@ -472,6 +472,7 @@
"We present the results of ABL as follows, which include the reasoning accuracy (the proportion of equations that are correctly summed), and the training time used to achieve this accuracy. These results are compared with the following methods:\n",
"- [**NeurASP**](https://github.com/azreasoners/NeurASP): An extension of answer set programs by treating the neural network output as the probability distribution over atomic facts;\n",
"- [**DeepProbLog**](https://github.com/ML-KULeuven/deepproblog): An extension of ProbLog by introducing neural predicates in Probabilistic Logic Programming;\n",
"- [**LTN**](https://github.com/logictensornetworks/logictensornetworks): A neural-symbolic framework that uses differentiable first-order logic language to incorporate data and logic;\n",
"- [**DeepStochLog**](https://github.com/ML-KULeuven/deepstochlog): A neural-symbolic framework based on stochastic logic programs."
]
},
@@ -507,6 +508,11 @@
" <td>2045</td>\n",
"</tr>\n",
"<tr>\n",
" <td>LTN</td>\n",
" <td>97.4</td>\n",
" <td>251</td>\n",
"</tr>\n",
"<tr>\n",
" <td>DeepStochLog</td>\n",
" <td>97.5</td>\n",
" <td>257</td>\n",


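The "Average Memory Usage" columns added throughout this commit could be collected with a background sampler along the lines of the sketch below. This is an illustrative assumption, not the authors' measurement script: `tracemalloc` only tracks Python-heap allocations, so the actual figures in the tables (which include framework and model overhead) were presumably measured at the process level, e.g. with `psutil` or `nvidia-smi`.

```python
import threading
import time
import tracemalloc


def run_with_memory_sampling(fn, interval=0.01):
    """Run fn() while a background thread samples Python-heap usage.

    Returns (fn's result, average sampled usage in MB). Note that
    tracemalloc only sees Python allocations, not C-extension or GPU memory.
    """
    tracemalloc.start()
    samples = []
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append(tracemalloc.get_traced_memory()[0])  # current bytes
            time.sleep(interval)

    thread = threading.Thread(target=sampler)
    thread.start()
    try:
        result = fn()
    finally:
        stop.set()
        thread.join()
        tracemalloc.stop()
    avg_mb = sum(samples) / max(len(samples), 1) / (1024 ** 2)
    return result, avg_mb


def toy_training_loop():
    # Stand-in for a training run: allocate some batches and hold them briefly.
    batches = [list(range(1000)) for _ in range(2000)]
    time.sleep(0.05)
    return len(batches)


if __name__ == "__main__":
    result, avg_mb = run_with_memory_sampling(toy_training_loop)
    print(f"processed {result} batches, average memory usage: {avg_mb:.1f} MB")
```

Averaging sampled usage (rather than reporting the peak) matches the tables' "average memory usage" wording; a per-process RSS sampler would follow the same shape with `psutil.Process().memory_info().rss` in place of `tracemalloc.get_traced_memory()`.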