
[DOC] support latex

pull/1/head
troyyyyy 1 year ago
commit b93907ea2c
5 changed files with 62 additions and 25 deletions
  1. docs/Brief-Introduction/Components.rst (+1, -1)
  2. docs/Brief-Introduction/Usage.rst (+1, -1)
  3. docs/Overview/Abductive Learning.rst (+48, -17)
  4. docs/README.rst (+8, -2)
  5. docs/index.rst (+4, -4)

docs/Brief-Introduction/Components.rst (+1, -1)

@@ -1,5 +1,5 @@
Components
==================
==========


.. contents:: Table of Contents



docs/Brief-Introduction/Usage.rst (+1, -1)

@@ -1,5 +1,5 @@
Use ABL-Package Step by Step
==================
============================


.. contents:: Table of Contents



docs/Overview/Abductive Learning.rst (+48, -17)

@@ -1,27 +1,58 @@
Abductive Learning
==================


Traditional supervised machine learning, e.g. classification, is predominantly data-driven. Here, a set of training examples \left\{\left(x_1, y_1\right), \ldots,\left(x_m, y_m\right)\right\} is given, where x_i \in \mathcal{X} is the i-th training instance, y_i \in \mathcal{Y} is the corresponding ground-truth label. These data are then used to train a classifier model f: \mathcal{X} \mapsto \mathcal{Y} to accurately predict the unseen data.

(Possibly add a figure here, e.g. ML on the left and ML+KB on the right)

In Abductive Learning (ABL), we assume that, in addition to data as examples, there is also a knowledge base \mathcal{KB} containing domain knowledge at our disposal. We aim for the classifier f: \mathcal{X} \mapsto \mathcal{Y} to make correct predictions on unseen data, and meanwhile, the logical facts grounded by \left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\} should be compatible with \mathcal{KB}.

The process of ABL is as follows:

1. Upon receiving data inputs \left\{x_1,\dots,x_m\right\}, pseudo-labels \left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\} are obtained, predicted by a data-driven classifier model.
2. These pseudo-labels are then converted into logical facts \mathcal{O} that are acceptable for logical reasoning.
3. Conduct joint reasoning with \mathcal{KB} to find any inconsistencies.
4. If found, the logical facts contributing to minimal inconsistency can be identified and then modified through abductive reasoning, returning modified logical facts \Delta(\mathcal{O}) compatible with \mathcal{KB}.
5. These modified logical facts are converted back to the form of pseudo-labels, and used for further learning of the classifier.
6. As a result, the classifier is updated and replaces the previous one in the next iteration.

This process is repeated until the classifier is no longer updated, or the logical facts \mathcal{O} are compatible with the knowledge base.
Traditional supervised machine learning, e.g. classification, is
predominantly data-driven. Here, a set of training examples
:math:`\left\{\left(x_1, y_1\right), \ldots,\left(x_m, y_m\right)\right\}`
is given, where :math:`x_i \in \mathcal{X}` is the :math:`i`-th training
instance and :math:`y_i \in \mathcal{Y}` is the corresponding
ground-truth label. These data are then used to train a classifier model
:math:`f: \mathcal{X} \mapsto \mathcal{Y}` to accurately predict unseen
data.
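
One standard way to obtain such a classifier is empirical risk
minimization over the training examples (a textbook formulation, not
something specific to ABL-Package):

.. math::

   f = \mathop{\arg\min}_{f'} \frac{1}{m} \sum_{i=1}^{m}
   \ell\bigl(f'(x_i), y_i\bigr),

where :math:`\ell` is a loss function such as the cross-entropy loss.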

In **Abductive Learning (ABL)**, we assume that, in addition to the data
examples, a knowledge base :math:`\mathcal{KB}` containing domain
knowledge is also at our disposal. We aim for the classifier :math:`f:
\mathcal{X} \mapsto \mathcal{Y}` to make correct predictions on unseen
data and, meanwhile, for the logical facts grounded by
:math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
to be compatible with :math:`\mathcal{KB}`.
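
One common way to formalize this compatibility requirement (the
notation below is a standard choice, not one mandated by the package)
is to ask that the knowledge base together with the grounded facts
entails no contradiction:

.. math::

   \mathcal{KB} \cup \left\{f(\boldsymbol{x}_1), \ldots,
   f(\boldsymbol{x}_m)\right\} \nvdash \bot.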

The process of ABL is as follows:

1. Upon receiving data inputs :math:`\left\{x_1,\dots,x_m\right\}`,
pseudo-labels
:math:`\left\{f(\boldsymbol{x}_1), \ldots, f(\boldsymbol{x}_m)\right\}`
are predicted by a data-driven classifier model.
2. These pseudo-labels are then converted into logical facts
:math:`\mathcal{O}` that are acceptable for logical reasoning.
3. Joint reasoning with :math:`\mathcal{KB}` is conducted to find any
inconsistencies. If any are found, the logical facts that lead to
minimal inconsistency can be identified.
4. The identified facts are modified through abductive reasoning,
returning revised logical facts :math:`\Delta(\mathcal{O})` that are
compatible with :math:`\mathcal{KB}`.
5. These revised logical facts are converted back to the form of
pseudo-labels, and used for further learning of the classifier.
6. As a result, the classifier is updated and replaces the previous one
in the next iteration.

This process is repeated until the classifier is no longer updated, or
the logical facts :math:`\mathcal{O}` are compatible with the knowledge
base.
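
In code, the whole loop can be sketched roughly as below. The snippet
is purely illustrative: all names (``model``, ``kb``, ``to_facts``,
``abduce``, ``from_facts``, ``is_consistent``) are hypothetical
placeholders and do not correspond to the ABL-Package API.

.. code:: python

   # A minimal sketch of the ABL loop described above; every name here is
   # a hypothetical placeholder, not the ABL-Package API.
   def abductive_learning(model, kb, X, max_iters=50):
       for _ in range(max_iters):
           pseudo_labels = [model.predict(x) for x in X]   # 1. data-driven predictions
           facts = kb.to_facts(pseudo_labels)              # 2. pseudo-labels -> logical facts O
           if kb.is_consistent(facts):                     # stop once O is compatible with KB
               break
           revised_facts = kb.abduce(facts)                # 3-4. revise minimally inconsistent facts
           revised_labels = kb.from_facts(revised_facts)   # 5. logical facts -> pseudo-labels
           model.fit(X, revised_labels)                    # 6. update the classifier
       return model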


The following figure illustrates this process:


(A figure)


We can observe that in the above figure, the left half involves machine learning, while the right half involves logical reasoning. Thus, the entire abductive learning process is a continuous cycle of machine learning and logical reasoning. This effectively form a dual-driven (data & knowledge driven) learning system, integrating and balancing the use of machine learning and logical reasoning in a unified model.
We can observe that in the above figure, the left half involves machine
learning, while the right half involves logical reasoning. Thus, the
entire abductive learning process is a continuous cycle of machine
learning and logical reasoning. This effectively forms a paradigm that
is dual-driven by both data and domain knowledge, integrating and
balancing the use of machine learning and logical reasoning in a unified
model.

What is Abductive Reasoning?
^^^^^^^^^^^^^^^^^^^^^^^^^^^^





docs/README.rst (+8, -2)

@@ -1,7 +1,12 @@
ABL-Package
===========


ABL-Package is an open source library for Abductive Learning that supports building a model leveraging information from both data and (logical) domain knowledge. Using ABL-Package, users may form a dual-driven (data & knowledge driven) learning system, integrating and balancing the use of machine learning and logical reasoning in a unified model.
**ABL-Package** is an open source library for **Abductive Learning**
that supports building a model leveraging information from both data and
(logical) domain knowledge. Using ABL-Package, users may form a
dual-driven (data & knowledge driven) learning system, integrating and
balancing the use of machine learning and logical reasoning in a unified
model.


(Insert an image here)


@@ -14,7 +19,8 @@ ABL is distributed on PyPI and can be installed with ``pip``:


$ pip install abl


Alternatively, to install ABL by source code, download this project and sequentially run following commands in your terminal/command line.
Alternatively, to install ABL from source, download this project and
sequentially run the following commands in your terminal/command line.


.. code:: console




docs/index.rst (+4, -4)

@@ -1,7 +1,7 @@
.. include:: README.rst


.. toctree::
:maxdepth: 2
:maxdepth: 1
:caption: Overview


Overview/Abductive Learning
@@ -9,14 +9,14 @@
Overview/Installation


.. toctree::
:maxdepth: 2
:maxdepth: 1
:caption: A Brief Introduction


Brief-Introduction/Components
Brief-Introduction/Usage


.. toctree::
:maxdepth: 2
:maxdepth: 1
:caption: Examples


Examples/MNISTAdd
@@ -37,7 +37,7 @@
API/abl.utils


.. toctree::
:maxdepth: 2
:maxdepth: 1
:caption: References


References

