
Update metrics and figure in readme.

tags/v0.1.0
Ke Zhen 6 years ago
commit 3ab8950d8d
4 changed files with 43 additions and 1 deletion
  1. +1
    -1
      README.md
  2. +42
    -0
      fastNLP/core/metrics.py
  3. BIN
      fastnlp-architecture.jpg
  4. BIN
      fastnlp-architecture.pdf

+ 1
- 1
README.md

@@ -5,7 +5,7 @@


fastNLP is a modular Natural Language Processing system based on PyTorch, for fast development of NLP tools. It divides the deep-learning-based NLP model into different modules. These modules fall into 4 categories: encoder, interaction, aggregation and decoder, and each category contains several implemented modules. Encoder modules encode the input into an abstract representation, interaction modules make the information in the representation interact with each other, aggregation modules aggregate and reduce information, and decoder modules decode the representation into the output. Most current NLP models can be built from these modules, which vastly simplifies the process of developing NLP models. The architecture of fastNLP is shown in the figure below:


- ![](https://github.com/fastnlp/fastNLP/raw/master/fastnlp-architecture.pdf)
+ ![](https://github.com/fastnlp/fastNLP/raw/master/fastnlp-architecture.jpg)




## Requirements


+ 42
- 0
fastNLP/core/metrics.py

@@ -173,6 +173,48 @@ def classification_report(y_true, y_pred, labels=None, target_names=None, digits
    raise NotImplementedError




def accuracy_topk(y_true, y_prob, k=1):
"""
Compute accuracy of y_true matching top-k probable
labels in y_prob.

Paras:
y_ture - ndarray, true label, [n_samples]
y_prob - ndarray, label probabilities, [n_samples, n_classes]
k - int, k in top-k
Returns:
accuracy of top-k
"""

y_pred_topk = np.argsort(y_prob, axis=-1)[:, -1:-k - 1:-1]
y_true_tile = np.tile(np.expand_dims(y_true, axis=1), (1, k))
y_match = np.any(y_pred_topk == y_true_tile, axis=-1)
acc = np.sum(y_match) / y_match.shape[0]

return acc
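A quick sanity check of the new accuracy_topk. The function body is restated here so the snippet runs standalone (in the commit it lives in fastNLP/core/metrics.py); the sample arrays are made up for illustration:

```python
import numpy as np

def accuracy_topk(y_true, y_prob, k=1):
    # Indices of the k highest-probability classes per sample, best first.
    y_pred_topk = np.argsort(y_prob, axis=-1)[:, -1:-k - 1:-1]
    # Repeat each true label k times so it can be compared elementwise.
    y_true_tile = np.tile(np.expand_dims(y_true, axis=1), (1, k))
    # A sample counts as correct if its true label appears anywhere in the top k.
    y_match = np.any(y_pred_topk == y_true_tile, axis=-1)
    return np.sum(y_match) / y_match.shape[0]

y_true = np.array([2, 0])
y_prob = np.array([[0.1, 0.3, 0.6],   # argmax = 2, matches y_true[0]
                   [0.4, 0.5, 0.1]])  # argmax = 1, misses y_true[1]

print(accuracy_topk(y_true, y_prob, k=1))  # 0.5: only sample 0 is top-1 correct
print(accuracy_topk(y_true, y_prob, k=2))  # 1.0: label 0 is in sample 1's top-2
```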


def pred_topk(y_prob, k=1):
"""
Return top-k predicted labels and corresponding probabilities.

Args:
y_prob - ndarray, size [n_samples, n_classes], probabilities on labels
k - int, k of top-k
Returns:
y_pred_topk - ndarray, size [n_samples, k], predicted top-k labels
y_prob_topk - ndarray, size [n_samples, k], probabilities for
top-k labels
"""

y_pred_topk = np.argsort(y_prob, axis=-1)[:, -1:-k - 1:-1]
x_axis_index = np.tile(
np.arange(len(y_prob))[:, np.newaxis],
(1, k))
y_prob_topk = y_prob[x_axis_index, y_pred_topk]
return y_pred_topk, y_prob_topk
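And a matching check of pred_topk, again restated so it runs standalone with made-up probabilities. The fancy-indexing step pairs each row index with the top-k column indices to gather the corresponding probabilities:

```python
import numpy as np

def pred_topk(y_prob, k=1):
    # Top-k class indices per sample, highest probability first.
    y_pred_topk = np.argsort(y_prob, axis=-1)[:, -1:-k - 1:-1]
    # Row indices repeated k times, shape [n_samples, k], so that
    # y_prob[row, col] gathers the probability of each top-k label.
    x_axis_index = np.tile(
        np.arange(len(y_prob))[:, np.newaxis],
        (1, k))
    y_prob_topk = y_prob[x_axis_index, y_pred_topk]
    return y_pred_topk, y_prob_topk

y_prob = np.array([[0.1, 0.3, 0.6],
                   [0.4, 0.5, 0.1]])

labels, probs = pred_topk(y_prob, k=2)
print(labels)  # [[2 1], [1 0]]: best and second-best class per row
print(probs)   # [[0.6 0.3], [0.5 0.4]]: their probabilities
```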


if __name__ == '__main__':
    y = np.array([1, 0, 1, 0, 1, 1])
    print(_label_types(y))

BIN
fastnlp-architecture.jpg

Width: 960  |  Height: 540  |  Size: 36 kB

BIN
fastnlp-architecture.pdf

