
Updated README.md

tags/v0.4.10
ChenXin 5 years ago
parent
commit
001c835e0b
7 changed files with 64 additions and 1556 deletions
  1. +61
    -47
      README.md
  2. +3
    -8
      tutorials/README.md
  3. +0
    -370
      tutorials/fastNLP_padding_tutorial.ipynb
  4. +0
    -751
      tutorials/fastnlp_10min_tutorial.ipynb
  5. +0
    -97
      tutorials/fastnlp_test_tutorial.ipynb
  6. +0
    -0
      tutorials/tutorial_1.ipynb
  7. +0
    -283
      tutorials/tutorial_for_developer.md

+ 61
- 47
README.md

@@ -6,94 +6,108 @@
![Hex.pm](https://img.shields.io/hexpm/l/plug.svg)
[![Documentation Status](https://readthedocs.org/projects/fastnlp/badge/?version=latest)](http://fastnlp.readthedocs.io/?badge=latest)

FastNLP is a modular Natural Language Processing system based on PyTorch, built for fast development of NLP models.
fastNLP is a lightweight NLP toolkit. You can use it to quickly complete tasks such as named entity recognition (NER), Chinese word segmentation, and text classification, or use it to build complex network models for research. Its features include:

- A unified tabular data container that keeps data preprocessing clean and clear, with built-in DataSet loaders for many datasets that spare you the preprocessing code.
- Handy NLP utilities, such as loading pretrained embeddings and caching intermediate data.
- Thorough Chinese documentation.
- Many advanced modules, such as Variational LSTM, Transformer, and CRF.
- Ready-to-use packaged models such as CNNText and Biaffine.
- A convenient, extensible trainer, with a variety of built-in callbacks for experiment logging, exception catching, and more.


## Installation Guide

fastNLP depends on the following packages:

+ numpy
+ torch>=0.4.0
+ tqdm
+ nltk

Installing torch may depend on your operating system and CUDA version; see the PyTorch website for details.
Once the dependencies are installed, you can install fastNLP from the command line:

```shell
pip install fastNLP
```
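
After installation, you can check that the package imports cleanly:

```shell
python -c "import fastNLP"
```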


## Built-in Components

Most neural networks for NLP tasks can be viewed as a composition of three kinds of modules: encoders, aggregators, and decoders.


![](./docs/source/figures/text_classification.png)

fastNLP ships many components of these three kinds in its modules package, helping users quickly build the networks they need. The functionality and common examples of each kind are listed below:
<table>
<tr>
<td><b> module type </b></td>
<td><b> functionality </b></td>
<td><b> example </b></td>
</tr>
<tr>
<td> encoder </td>
<td> encode the input into an abstract representation with expressive power </td>
<td> embedding, RNN, CNN, transformer </td>
</tr>
<tr>
<td> aggregator </td>
<td> aggregate and reduce information from multiple vectors </td>
<td> self-attention, max-pooling </td>
</tr>
<tr>
<td> decoder </td>
<td> decode the representation into the required output form </td>
<td> MLP, CRF </td>
</tr>
</table>
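
To make the composition concrete, here is a minimal sketch in plain PyTorch (not fastNLP's own modules; the class and parameters are purely illustrative) of a text classifier organized as encoder, aggregator, and decoder:

```python
import torch
import torch.nn as nn

class TinyTextClassifier(nn.Module):
    """A sketch of the encoder -> aggregator -> decoder composition."""
    def __init__(self, vocab_size, embed_dim=50, num_classes=5):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, embed_dim)  # encoder: embedding
        self.decoder = nn.Linear(embed_dim, num_classes)    # decoder: an MLP

    def forward(self, words):          # words: [batch, seq_len]
        x = self.encoder(words)        # [batch, seq_len, embed_dim]
        x, _ = torch.max(x, dim=1)     # aggregator: max-pooling over the sequence
        return self.decoder(x)         # [batch, num_classes]
```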


## Requirements

- Python>=3.6
- numpy>=1.14.2
- torch>=0.4.0
- tensorboardX
- tqdm>=4.28.1

## Complete Models
fastNLP implements many complete models for different NLP tasks, all of which have been trained and tested.

## Resources
You can find the relevant information in the following two places:
- [Introduction](reproduction/)
- [Source Code](fastNLP/models/)

- [Tutorials](https://github.com/fastnlp/fastNLP/tree/master/tutorials)
- [Documentation](https://fastnlp.readthedocs.io/en/latest/)
- [Source Code](https://github.com/fastnlp/fastNLP)
## Project Structure

![](./docs/source/figures/workflow.png)

## Installation
Run the following commands to install fastNLP package.
```shell
pip install fastNLP
```

## Models
fastNLP implements different models for various NLP tasks.
Each model has been trained and tested carefully.

Check out models' performance, usage and source code here.
- [Documentation](reproduction/)
- [Source Code](fastNLP/models/)

## Project Structure
The overall workflow of fastNLP is shown in the figure above, and the project is structured as follows:

<table>
<tr>
<td><b> fastNLP </b></td>
<td> an open-source NLP library </td>
</tr>
<tr>
<td><b> fastNLP.api </b></td>
<td> APIs for end-to-end prediction </td>
</tr>
<tr>
<td><b> fastNLP.core </b></td>
<td> data representation and the train/test procedure: data-handling components, trainer, tester, etc. </td>
</tr>
<tr>
<td><b> fastNLP.models </b></td>
<td> a collection of complete neural network models </td>
</tr>
<tr>
<td><b> fastNLP.modules </b></td>
<td> a collection of PyTorch sub-models/components for building networks </td>
</tr>
<tr>
<td><b> fastNLP.io </b></td>
<td> readers & savers for data and model I/O </td>
</tr>
</table>

## Resources

- [Tutorials](https://github.com/fastnlp/fastNLP/tree/master/tutorials)
- [Documentation](https://fastnlp.readthedocs.io/en/latest/)
- [Source Code](https://github.com/fastnlp/fastNLP)



*In memory of @FengZiYjun. May his soul rest in peace. We will miss you very very much!*

+ 3
- 8
tutorials/README.md

@@ -1,12 +1,7 @@
# fastNLP Tutorials

### Quick Start
- One-minute quick start: `fastnlp_1min_tutorial.ipynb` [Click Here](https://github.com/fastnlp/fastNLP/tree/master/tutorials/fastnlp_1min_tutorial.ipynb)
- Ten-minute quick start: `fastnlp_10min_tutorial.ipynb` [Click Here](https://github.com/fastnlp/fastNLP/tree/master/tutorials/fastnlp_10min_tutorial.ipynb)
`quickstart.ipynb` [Click Here](https://github.com/fastnlp/fastNLP/tree/master/tutorials/quickstart.ipynb)

### Advanced Tutorial
- `fastnlp_advanced_tutorial/advance_tutorial.ipynb` [Click Here](https://github.com/fastnlp/fastNLP/tree/master/tutorials/fastnlp_advanced_tutorial/advance_tutorial.ipynb)


### Developer Guide
- `tutorial_for_developer.md` [Click Here](https://github.com/fastnlp/fastNLP/tree/master/tutorials/tutorial_for_developer.md)
### Detailed Tutorial: Tutorial 1
Ten-minute quick start: `tutorial_1.ipynb` [Click Here](https://github.com/fastnlp/fastNLP/tree/master/tutorials/tutorial_1.ipynb)

+ 0
- 370
tutorials/fastNLP_padding_tutorial.ipynb

@@ -1,370 +0,0 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/yh/miniconda2/envs/python3/lib/python3.6/site-packages/tqdm/autonotebook/__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)\n",
" \" (e.g. in jupyter console)\", TqdmExperimentalWarning)\n"
]
},
{
"data": {
"text/plain": [
"DataSet({'raw_sent': this is a bad idea . type=str,\n",
"'label': 0 type=int,\n",
"'word_str_lst': ['this', 'is', 'a', 'bad', 'idea', '.'] type=list,\n",
"'words': [4, 2, 5, 6, 7, 3] type=list},\n",
"{'raw_sent': it is great . type=str,\n",
"'label': 1 type=int,\n",
"'word_str_lst': ['it', 'is', 'great', '.'] type=list,\n",
"'words': [8, 2, 9, 3] type=list})"
]
},
"execution_count": 1,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 假设有以下的DataSet, 这里只是为了举例所以只选择了两个sample\n",
"import sys\n",
"import os\n",
"sys.path.append('/Users/yh/Desktop/fastNLP/fastNLP')\n",
"\n",
"from fastNLP import DataSet\n",
"from fastNLP import Instance\n",
"from fastNLP import Vocabulary\n",
"\n",
"dataset = DataSet()\n",
"dataset.append(Instance(raw_sent='This is a bad idea .', label=0))\n",
"dataset.append(Instance(raw_sent='It is great .', label=1))\n",
"\n",
"# 按照fastNLP_10min_tutorial.ipynb的步骤,对数据进行一些处理。这里为了演示padding操作,把field的名称做了一些改变\n",
"dataset.apply(lambda x:x['raw_sent'].lower(), new_field_name='raw_sent')\n",
"dataset.apply(lambda x:x['raw_sent'].split(), new_field_name='word_str_lst')\n",
"\n",
"# 建立Vocabulary\n",
"word_vocab = Vocabulary()\n",
"dataset.apply(lambda x:word_vocab.update(x['word_str_lst']))\n",
"dataset.apply(lambda x:[word_vocab.to_index(word) for word in x['word_str_lst']], new_field_name='words')\n",
"\n",
"# 检查以下是否得到我们想要的结果了\n",
"dataset[:2]"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'word_str_lst': array([list(['this', 'is', 'a', 'bad', 'idea', '.']),\n",
" list(['it', 'is', 'great', '.'])], dtype=object), 'words': tensor([[4, 2, 5, 6, 7, 3],\n",
" [8, 2, 9, 3, 0, 0]])}\n",
"batch_y has: {'label': tensor([0, 1])}\n"
]
},
{
"data": {
"text/plain": [
"'\"\\n结果中\\n Batch会对元素类型(元素即最内层的数据,raw_sent为str,word_str_lst为str,words为int, label为int)为int或者float的数据进行默认\\n padding,而非int或float的则不进行padding。但若每个Instance中该field为二维数据,也不进行padding。因为二维数据的padding涉及到\\n 两个维度的padding,不容易自动判断padding的形式。\\n'"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 将field设置为input或者target\n",
"dataset.set_input('word_str_lst')\n",
"dataset.set_input('words')\n",
"dataset.set_target('label')\n",
"\n",
"# 使用Batch取出batch数据\n",
"from fastNLP.core.batch import Batch\n",
"from fastNLP.core.sampler import RandomSampler\n",
"\n",
"batch_iterator = Batch(dataset=dataset, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
" print(\"batch_x has: \", batch_x)\n",
" print(\"batch_y has: \", batch_y)\n",
"\"\"\"\"\n",
"结果中\n",
" Batch会对元素类型(元素即最内层的数据,raw_sent为str,word_str_lst为str,words为int, label为int)为int或者float的数据进行默认\n",
" padding,而非int或float的则不进行padding。但若每个Instance中该field为二维数据,也不进行padding。因为二维数据的padding涉及到\n",
" 两个维度的padding,不容易自动判断padding的形式。\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'word_str_lst': array([list(['it', 'is', 'great', '.']),\n",
" list(['this', 'is', 'a', 'bad', 'idea', '.'])], dtype=object), 'words': tensor([[ 8, 2, 9, 3, -100, -100],\n",
" [ 4, 2, 5, 6, 7, 3]])}\n",
"batch_y has: {'label': tensor([1, 0])}\n"
]
}
],
"source": [
"# 所有的pad_val都默认为0,如果需要修改某一个field的默认pad值,可以通过DataSet.set_pad_val(field_name, pad_val)进行修改\n",
"# 若需要将word的padding修改为-100\n",
"dataset.set_pad_val('words', pad_val=-100)\n",
"batch_iterator = Batch(dataset=dataset, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
" print(\"batch_x has: \", batch_x)\n",
" print(\"batch_y has: \", batch_y)\n",
"# pad的值修改为-100了"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"DataSet({'raw_sent': this is a bad idea . type=str,\n",
"'label': 0 type=int,\n",
"'word_str_lst': ['this', 'is', 'a', 'bad', 'idea', '.'] type=list,\n",
"'words': [4, 2, 5, 6, 7, 3] type=list,\n",
"'char_str_lst': [['t', 'h', 'i', 's'], ['i', 's'], ['a'], ['b', 'a', 'd'], ['i', 'd', 'e', 'a'], ['.']] type=list,\n",
"'chars': [[4, 9, 2, 5], [2, 5], [3], [10, 3, 6], [2, 6, 7, 3], [8]] type=list},\n",
"{'raw_sent': it is great . type=str,\n",
"'label': 1 type=int,\n",
"'word_str_lst': ['it', 'is', 'great', '.'] type=list,\n",
"'words': [8, 2, 9, 3] type=list,\n",
"'char_str_lst': [['i', 't'], ['i', 's'], ['g', 'r', 'e', 'a', 't'], ['.']] type=list,\n",
"'chars': [[2, 4], [2, 5], [11, 12, 7, 3, 4], [8]] type=list})"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 若需要使用二维padding或指定padding方式,可以通过设置该field的padder实现,下面以英文的character padding为例。在某些场景下,可能想要\n",
"# 使用英文word的character作为特征,character的padding为二维padding,fastNLP默认只会进行一维padding。\n",
"\n",
"dataset.apply(lambda x: [[c for c in word] for word in x['word_str_lst']], new_field_name='char_str_lst')\n",
"char_vocab = Vocabulary()\n",
"dataset.apply(lambda x:[char_vocab.update(chars) for chars in x['char_str_lst']])\n",
"dataset.apply(lambda x:[[char_vocab.to_index(c) for c in chars] for chars in x['char_str_lst']],new_field_name='chars')\n",
"dataset[:2]"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'word_str_lst': array([list(['this', 'is', 'a', 'bad', 'idea', '.']),\n",
" list(['it', 'is', 'great', '.'])], dtype=object), 'words': tensor([[ 4, 2, 5, 6, 7, 3],\n",
" [ 8, 2, 9, 3, -100, -100]]), 'chars': array([list([[4, 9, 2, 5], [2, 5], [3], [10, 3, 6], [2, 6, 7, 3], [8]]),\n",
" list([[2, 4], [2, 5], [11, 12, 7, 3, 4], [8]])], dtype=object)}\n",
"batch_y has: {'label': tensor([0, 1])}\n"
]
},
{
"data": {
"text/plain": [
"'\\n 其它field与之前的是相同的。chars因为存在两个维度需要padding,不能自动决定padding方式,所以直接输出了原始形式。\\n'"
]
},
"execution_count": 5,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 如果不针对二维的character指定padding方法\n",
"dataset.set_input('chars')\n",
"batch_iterator = Batch(dataset=dataset, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
" print(\"batch_x has: \", batch_x)\n",
" print(\"batch_y has: \", batch_y)\n",
" \n",
"\"\"\"\n",
" 其它field与之前的是相同的。chars因为存在两个维度需要padding,不能自动决定padding方式,所以直接输出了原始形式。\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'word_str_lst': array([list(['this', 'is', 'a', 'bad', 'idea', '.']),\n",
" list(['it', 'is', 'great', '.'])], dtype=object), 'words': tensor([[ 4, 2, 5, 6, 7, 3],\n",
" [ 8, 2, 9, 3, -100, -100]]), 'chars': tensor([[[ 4, 9, 2, 5],\n",
" [ 2, 5, 0, 0],\n",
" [ 3, 0, 0, 0],\n",
" [10, 3, 6, 0],\n",
" [ 2, 6, 7, 3],\n",
" [ 8, 0, 0, 0]],\n",
"\n",
" [[ 2, 4, 0, 0],\n",
" [ 2, 5, 0, 0],\n",
" [11, 12, 7, 3],\n",
" [ 8, 0, 0, 0],\n",
" [ 0, 0, 0, 0],\n",
" [ 0, 0, 0, 0]]])}\n",
"batch_y has: {'label': tensor([0, 1])}\n"
]
},
{
"data": {
"text/plain": [
"'\\n chars被正确padding了\\n'"
]
},
"execution_count": 6,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 若要使用二维padding,需要手动设置padding方式\n",
"from fastNLP.core.fieldarray import EngChar2DPadder\n",
"dataset.set_padder('chars', EngChar2DPadder())\n",
"batch_iterator = Batch(dataset=dataset, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
" print(\"batch_x has: \", batch_x)\n",
" print(\"batch_y has: \", batch_y)\n",
" \n",
"\"\"\"\n",
" chars被正确padding了\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'raw_sent': ['this is a bad idea .', 'it is great . '], 'word_str_lst': array([list(['this', 'is', 'a', 'bad', 'idea', '.']),\n",
" list(['it', 'is', 'great', '.'])], dtype=object), 'words': tensor([[ 4, 2, 5, 6, 7, 3],\n",
" [ 8, 2, 9, 3, -100, -100]]), 'chars': tensor([[[ 4, 9, 2, 5],\n",
" [ 2, 5, 0, 0],\n",
" [ 3, 0, 0, 0],\n",
" [10, 3, 6, 0],\n",
" [ 2, 6, 7, 3],\n",
" [ 8, 0, 0, 0]],\n",
"\n",
" [[ 2, 4, 0, 0],\n",
" [ 2, 5, 0, 0],\n",
" [11, 12, 7, 3],\n",
" [ 8, 0, 0, 0],\n",
" [ 0, 0, 0, 0],\n",
" [ 0, 0, 0, 0]]])}\n",
"batch_y has: {'label': tensor([0, 1])}\n"
]
},
{
"data": {
"text/plain": [
"'\\n raw_sent正确输出,对应内容也进行了pad。\\n'"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# 如果AutoPad与EngChar2DPadder不能满足需要,可以自己实现Padder对象。这里举一个例子,比如需要把raw_sentence pad到一样长\n",
"from fastNLP.core.fieldarray import PadderBase\n",
"\n",
"class PadStr(PadderBase):\n",
" def __init__(self, pad_val=' '):\n",
" super().__init__(pad_val=pad_val) #让父类管理pad_val的值,这样可以通过DataSet.set_pad_val()修改到该值\n",
" \n",
" def __call__(self, contents, field_name, field_ele_dtype):\n",
" \"\"\"\n",
" 如果以上面的例子举例,在raw_sent这个field进行pad时,传入的\n",
" contents:\n",
" [\n",
" 'This is a bad idea .',\n",
" 'It is great .'\n",
" ]\n",
" field_name: 'raw_sent',当前field的名称,主要用于帮助debug。\n",
" field_ele_dtype: np.str. 这个参数基本都用不上,是该field中内部元素的类型\n",
" \"\"\"\n",
" max_len = max([len(str_) for str_ in contents])\n",
" pad_strs = []\n",
" for content in contents:\n",
" pad_strs.append(content + (max_len-len(content))*self.pad_val)\n",
" return pad_strs\n",
"\n",
"dataset.set_input('raw_sent')\n",
"dataset.set_padder('raw_sent', PadStr())\n",
"batch_iterator = Batch(dataset=dataset, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
" print(\"batch_x has: \", batch_x)\n",
" print(\"batch_y has: \", batch_y)\n",
"\n",
"\"\"\"\n",
" raw_sent正确输出,对应内容也进行了pad。\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

+ 0
- 751
tutorials/fastnlp_10min_tutorial.ipynb

@@ -1,751 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"fastNLP10 分钟上手教程\n",
"-------\n",
"\n",
"fastNLP提供方便的数据预处理,训练和测试模型的功能"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"如果您还没有通过pip安装fastNLP,可以执行下面的操作加载当前模块"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"import sys\n",
"sys.path.append(\"../\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"DataSet & Instance\n",
"------\n",
"\n",
"fastNLP用DataSet和Instance保存和处理数据。每个DataSet表示一个数据集,每个Instance表示一个数据样本。一个DataSet存有多个Instance,每个Instance可以自定义存哪些内容。\n",
"\n",
"有一些read_*方法,可以轻松从文件读取数据,存成DataSet。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"77\n"
]
}
],
"source": [
"from fastNLP import DataSet\n",
"from fastNLP import Instance\n",
"\n",
"# 从csv读取数据到DataSet\n",
"dataset = DataSet.read_csv('sample_data/tutorial_sample_dataset.csv', headers=('raw_sentence', 'label'), sep='\\t')\n",
"print(len(dataset))"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': A series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str}\n",
"{'raw_sentence': The plot is romantic comedy boilerplate from start to finish . type=str,\n",
"'label': 2 type=str}\n"
]
}
],
"source": [
"# 使用数字索引[k],获取第k个样本\n",
"print(dataset[0])\n",
"\n",
"# 索引也可以是负数\n",
"print(dataset[-3])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Instance\n",
"Instance表示一个样本,由一个或多个field(域,属性,特征)组成,每个field有名字和值。\n",
"\n",
"在初始化Instance时即可定义它包含的域,使用 \"field_name=field_value\"的写法。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'raw_sentence': fake data type=str,\n",
"'label': 0 type=str}"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# DataSet.append(Instance)加入新数据\n",
"dataset.append(Instance(raw_sentence='fake data', label='0'))\n",
"dataset[-1]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## DataSet.apply方法\n",
"数据预处理利器"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=str}\n"
]
}
],
"source": [
"# 将所有数字转为小写\n",
"dataset.apply(lambda x: x['raw_sentence'].lower(), new_field_name='raw_sentence')\n",
"print(dataset[0])"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=int}\n"
]
}
],
"source": [
"# label转int\n",
"dataset.apply(lambda x: int(x['label']), new_field_name='label')\n",
"print(dataset[0])"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=int,\n",
"'words': ['a', 'series', 'of', 'escapades', 'demonstrating', 'the', 'adage', 'that', 'what', 'is', 'good', 'for', 'the', 'goose', 'is', 'also', 'good', 'for', 'the', 'gander', ',', 'some', 'of', 'which', 'occasionally', 'amuses', 'but', 'none', 'of', 'which', 'amounts', 'to', 'much', 'of', 'a', 'story', '.'] type=list}\n"
]
}
],
"source": [
"# 使用空格分割句子\n",
"def split_sent(ins):\n",
" return ins['raw_sentence'].split()\n",
"dataset.apply(split_sent, new_field_name='words')\n",
"print(dataset[0])"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': a series of escapades demonstrating the adage that what is good for the goose is also good for the gander , some of which occasionally amuses but none of which amounts to much of a story . type=str,\n",
"'label': 1 type=int,\n",
"'words': ['a', 'series', 'of', 'escapades', 'demonstrating', 'the', 'adage', 'that', 'what', 'is', 'good', 'for', 'the', 'goose', 'is', 'also', 'good', 'for', 'the', 'gander', ',', 'some', 'of', 'which', 'occasionally', 'amuses', 'but', 'none', 'of', 'which', 'amounts', 'to', 'much', 'of', 'a', 'story', '.'] type=list,\n",
"'seq_len': 37 type=int}\n"
]
}
],
"source": [
"# 增加长度信息\n",
"dataset.apply(lambda x: len(x['words']), new_field_name='seq_len')\n",
"print(dataset[0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## DataSet.drop\n",
"筛选数据"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"77\n"
]
}
],
"source": [
"# 删除低于某个长度的词语\n",
"dataset.drop(lambda x: x['seq_len'] <= 3)\n",
"print(len(dataset))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 配置DataSet\n",
"1. 哪些域是特征,哪些域是标签\n",
"2. 切分训练集/验证集"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# 设置DataSet中,哪些field要转为tensor\n",
"\n",
"# set target,loss或evaluate中的golden,计算loss,模型评估时使用\n",
"dataset.set_target(\"label\")\n",
"# set input,模型forward时使用\n",
"dataset.set_input(\"words\")"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"54\n",
"23\n"
]
}
],
"source": [
"# 分出测试集、训练集\n",
"\n",
"test_data, train_data = dataset.split(0.3)\n",
"print(len(test_data))\n",
"print(len(train_data))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Vocabulary\n",
"------\n",
"\n",
"fastNLP中的Vocabulary轻松构建词表,将词转成数字"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'raw_sentence': the performances are an absolute joy . type=str,\n",
"'label': 4 type=int,\n",
"'words': [3, 1, 1, 26, 1, 1, 2] type=list,\n",
"'seq_len': 7 type=int}\n"
]
}
],
"source": [
"from fastNLP import Vocabulary\n",
"\n",
"# 构建词表, Vocabulary.add(word)\n",
"vocab = Vocabulary(min_freq=2)\n",
"train_data.apply(lambda x: [vocab.add(word) for word in x['words']])\n",
"vocab.build_vocab()\n",
"\n",
"# index句子, Vocabulary.to_index(word)\n",
"train_data.apply(lambda x: [vocab.to_index(word) for word in x['words']], new_field_name='words')\n",
"test_data.apply(lambda x: [vocab.to_index(word) for word in x['words']], new_field_name='words')\n",
"\n",
"\n",
"print(test_data[0])"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"batch_x has: {'words': tensor([[ 15, 72, 15, 73, 74, 7, 3, 75, 6, 3, 16, 16,\n",
" 76, 2],\n",
" [ 15, 72, 15, 73, 74, 7, 3, 75, 6, 3, 16, 16,\n",
" 76, 2]])}\n",
"batch_y has: {'label': tensor([ 1, 1])}\n"
]
}
],
"source": [
"# 如果你们需要做强化学习或者GAN之类的项目,你们也可以使用这些数据预处理的工具\n",
"from fastNLP.core.batch import Batch\n",
"from fastNLP.core.sampler import RandomSampler\n",
"\n",
"batch_iterator = Batch(dataset=train_data, batch_size=2, sampler=RandomSampler())\n",
"for batch_x, batch_y in batch_iterator:\n",
" print(\"batch_x has: \", batch_x)\n",
" print(\"batch_y has: \", batch_y)\n",
" break"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Model\n",
"定义一个PyTorch模型"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"CNNText(\n",
" (embed): Embedding(\n",
" 77, 50\n",
" (dropout): Dropout(p=0.0)\n",
" )\n",
" (conv_pool): ConvMaxpool(\n",
" (convs): ModuleList(\n",
" (0): Conv1d(50, 3, kernel_size=(3,), stride=(1,), padding=(2,))\n",
" (1): Conv1d(50, 4, kernel_size=(4,), stride=(1,), padding=(2,))\n",
" (2): Conv1d(50, 5, kernel_size=(5,), stride=(1,), padding=(2,))\n",
" )\n",
" )\n",
" (dropout): Dropout(p=0.1)\n",
" (fc): Linear(\n",
" (linear): Linear(in_features=12, out_features=5, bias=True)\n",
" )\n",
")"
]
},
"execution_count": 15,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from fastNLP.models import CNNText\n",
"model = CNNText((len(vocab), 50), num_classes=5, padding=2, dropout=0.1)\n",
"model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这是上述模型的forward方法。如果你不知道什么是forward方法,请参考我们的PyTorch教程。\n",
"\n",
"注意两点:\n",
"1. forward参数名字叫**word_seq**,请记住。\n",
"2. forward的返回值是一个**dict**,其中有个key的名字叫**output**。\n",
"\n",
"```Python\n",
" def forward(self, word_seq):\n",
" \"\"\"\n",
"\n",
" :param word_seq: torch.LongTensor, [batch_size, seq_len]\n",
" :return output: dict of torch.LongTensor, [batch_size, num_classes]\n",
" \"\"\"\n",
" x = self.embed(word_seq) # [N,L] -> [N,L,C]\n",
" x = self.conv_pool(x) # [N,L,C] -> [N,C]\n",
" x = self.dropout(x)\n",
" x = self.fc(x) # [N,C] -> [N, N_class]\n",
" return {'output': x}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这是上述模型的predict方法,是用来直接输出该任务的预测结果,与forward目的不同。\n",
"\n",
"注意两点:\n",
"1. predict参数名也叫**word_seq**。\n",
"2. predict的返回值是也一个**dict**,其中有个key的名字叫**predict**。\n",
"\n",
"```\n",
" def predict(self, word_seq):\n",
" \"\"\"\n",
"\n",
" :param word_seq: torch.LongTensor, [batch_size, seq_len]\n",
" :return predict: dict of torch.LongTensor, [batch_size, seq_len]\n",
" \"\"\"\n",
" output = self(word_seq)\n",
" _, predict = output['output'].max(dim=1)\n",
" return {'predict': predict}\n",
"```"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Trainer & Tester\n",
"------\n",
"\n",
"使用fastNLP的Trainer训练模型"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [],
"source": [
"from fastNLP import Trainer\n",
"from copy import deepcopy\n",
"from fastNLP.core.losses import CrossEntropyLoss\n",
"from fastNLP.core.metrics import AccuracyMetric\n",
"\n",
"\n",
"# 更改DataSet中对应field的名称,与模型的forward的参数名一致\n",
"# 因为forward的参数叫word_seq, 所以要把原本叫words的field改名为word_seq\n",
"# 这里的演示是让你了解这种**命名规则**\n",
"train_data.rename_field('words', 'word_seq')\n",
"test_data.rename_field('words', 'word_seq')\n",
"\n",
"# 顺便把label换名为label_seq\n",
"train_data.rename_field('label', 'label_seq')\n",
"test_data.rename_field('label', 'label_seq')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### loss\n",
"训练模型需要提供一个损失函数\n",
"\n",
"下面提供了一个在分类问题中常用的交叉熵损失。注意它的**初始化参数**。\n",
"\n",
"pred参数对应的是模型的forward返回的dict的一个key的名字,这里是\"output\"。\n",
"\n",
"target参数对应的是dataset作为标签的field的名字,这里是\"label_seq\"。"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [],
"source": [
"loss = CrossEntropyLoss(pred=\"output\", target=\"label_seq\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Metric\n",
"定义评价指标\n",
"\n",
"这里使用准确率。参数的“命名规则”跟上面类似。\n",
"\n",
"pred参数对应的是模型的predict方法返回的dict的一个key的名字,这里是\"predict\"。\n",
"\n",
"target参数对应的是dataset作为标签的field的名字,这里是\"label_seq\"。"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [],
"source": [
"metric = AccuracyMetric(pred=\"predict\", target=\"label_seq\")"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\tword_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 11]) \n",
"target fields after batch(if batch size is 2):\n",
"\tlabel_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n"
]
},
{
"ename": "NameError",
"evalue": "\nProblems occurred when calling CNNText.forward(self, words, seq_len=None)\n\tmissing param: ['words']\n\tunused field: ['word_seq']\n\tSuggestion: You need to provide ['words'] in DataSet and set it as input. ",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-19-ff7d68caf88a>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0msave_path\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0mbatch_size\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m32\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 9\u001b[0;31m n_epochs=5)\n\u001b[0m\u001b[1;32m 10\u001b[0m \u001b[0moverfit_trainer\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/Users/fdujyn/anaconda3/lib/python3.6/site-packages/fastNLP/core/trainer.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, train_data, model, optimizer, loss, batch_size, sampler, update_every, n_epochs, print_every, dev_data, metrics, metric_key, validate_every, save_path, prefetch, use_tqdm, device, callbacks, check_code_level)\u001b[0m\n\u001b[1;32m 447\u001b[0m _check_code(dataset=train_data, model=model, losser=losser, metrics=metrics, dev_data=dev_data,\n\u001b[1;32m 448\u001b[0m \u001b[0mmetric_key\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mmetric_key\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcheck_level\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mcheck_code_level\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 449\u001b[0;31m batch_size=min(batch_size, DEFAULT_CHECK_BATCH_SIZE))\n\u001b[0m\u001b[1;32m 450\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 451\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrain_data\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtrain_data\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/Users/fdujyn/anaconda3/lib/python3.6/site-packages/fastNLP/core/trainer.py\u001b[0m in \u001b[0;36m_check_code\u001b[0;34m(dataset, model, losser, metrics, batch_size, dev_data, metric_key, check_level)\u001b[0m\n\u001b[1;32m 808\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minfo_str\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 809\u001b[0m _check_forward_error(forward_func=model.forward, dataset=dataset,\n\u001b[0;32m--> 810\u001b[0;31m batch_x=batch_x, check_level=check_level)\n\u001b[0m\u001b[1;32m 811\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 812\u001b[0m \u001b[0mrefined_batch_x\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_build_args\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmodel\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mforward\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mbatch_x\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/Users/fdujyn/anaconda3/lib/python3.6/site-packages/fastNLP/core/utils.py\u001b[0m in \u001b[0;36m_check_forward_error\u001b[0;34m(forward_func, batch_x, dataset, check_level)\u001b[0m\n\u001b[1;32m 594\u001b[0m \u001b[0msugg_str\u001b[0m \u001b[0;34m+=\u001b[0m \u001b[0msuggestions\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 595\u001b[0m \u001b[0merr_str\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m'\\n'\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0;34m'\\n'\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mjoin\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0merrs\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0;34m'\\n\\tSuggestion: '\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0msugg_str\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 596\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mNameError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0merr_str\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 597\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0m_unused\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 598\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mcheck_level\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0mWARNING_CHECK_LEVEL\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: \nProblems occurred when calling CNNText.forward(self, words, seq_len=None)\n\tmissing param: ['words']\n\tunused field: ['word_seq']\n\tSuggestion: You need to provide ['words'] in DataSet and set it as input. "
]
}
],
"source": [
"# 实例化Trainer,传入模型和数据,进行训练\n",
"# 先在test_data拟合(确保模型的实现是正确的)\n",
"copy_model = deepcopy(model)\n",
"overfit_trainer = Trainer(model=copy_model, train_data=test_data, dev_data=test_data,\n",
" loss=loss,\n",
" metrics=metric,\n",
" save_path=None,\n",
" batch_size=32,\n",
" n_epochs=5)\n",
"overfit_trainer.train()"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"input fields after batch(if batch size is 2):\n",
"\tword_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2, 20]) \n",
"target fields after batch(if batch size is 2):\n",
"\tlabel_seq: (1)type:torch.Tensor (2)dtype:torch.int64, (3)shape:torch.Size([2]) \n",
"\n",
"training epochs started 2019-01-12 17-09-05\n"
]
},
{
"data": {
"text/plain": [
"HBox(children=(IntProgress(value=0, layout=Layout(flex='2'), max=5), HTML(value='')), layout=Layout(display='i…"
]
},
"metadata": {},
"output_type": "display_data"
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Evaluation at Epoch 1/5. Step:1/5. AccuracyMetric: acc=0.37037\n",
"Evaluation at Epoch 2/5. Step:2/5. AccuracyMetric: acc=0.37037\n",
"Evaluation at Epoch 3/5. Step:3/5. AccuracyMetric: acc=0.462963\n",
"Evaluation at Epoch 4/5. Step:4/5. AccuracyMetric: acc=0.425926\n",
"Evaluation at Epoch 5/5. Step:5/5. AccuracyMetric: acc=0.481481\n",
"\n",
"In Epoch:5/Step:5, got best dev performance:AccuracyMetric: acc=0.481481\n",
"Reloaded the best model.\n",
"Train finished!\n"
]
}
],
"source": [
"# 用train_data训练,在test_data验证\n",
"trainer = Trainer(model=model, train_data=train_data, dev_data=test_data,\n",
" loss=CrossEntropyLoss(pred=\"output\", target=\"label_seq\"),\n",
" metrics=AccuracyMetric(pred=\"predict\", target=\"label_seq\"),\n",
" save_path=None,\n",
" batch_size=32,\n",
" n_epochs=5)\n",
"trainer.train()\n",
"print('Train finished!')"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[tester] \n",
"AccuracyMetric: acc=0.481481\n",
"{'AccuracyMetric': {'acc': 0.481481}}\n"
]
}
],
"source": [
"# 调用Tester在test_data上评价效果\n",
"from fastNLP import Tester\n",
"\n",
"tester = Tester(data=test_data, model=model, metrics=AccuracyMetric(pred=\"predict\", target=\"label_seq\"),\n",
" batch_size=4)\n",
"acc = tester.test()\n",
"print(acc)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# In summary\n",
"\n",
"## fastNLP Trainer的伪代码逻辑\n",
"### 1. 准备DataSet,假设DataSet中共有如下的fields\n",
" ['raw_sentence', 'word_seq1', 'word_seq2', 'raw_label','label']\n",
" 通过\n",
" DataSet.set_input('word_seq1', word_seq2', flag=True)将'word_seq1', 'word_seq2'设置为input\n",
" 通过\n",
" DataSet.set_target('label', flag=True)将'label'设置为target\n",
"### 2. 初始化模型\n",
" class Model(nn.Module):\n",
" def __init__(self):\n",
" xxx\n",
" def forward(self, word_seq1, word_seq2):\n",
" # (1) 这里使用的形参名必须和DataSet中的input field的名称对应。因为我们是通过形参名, 进行赋值的\n",
" # (2) input field的数量可以多于这里的形参数量。但是不能少于。\n",
" xxxx\n",
" # 输出必须是一个dict\n",
"### 3. Trainer的训练过程\n",
" (1) 从DataSet中按照batch_size取出一个batch,调用Model.forward\n",
" (2) 将 Model.forward的结果 与 标记为target的field 传入Losser当中。\n",
" 由于每个人写的Model.forward的output的dict可能key并不一样,比如有人是{'pred':xxx}, {'output': xxx}; \n",
" 另外每个人将target可能也会设置为不同的名称, 比如有人是label, 有人设置为target;\n",
" 为了解决以上的问题,我们的loss提供映射机制\n",
" 比如CrossEntropyLosser的需要的输入是(prediction, target)。但是forward的output是{'output': xxx}; 'label'是target\n",
" 那么初始化losser的时候写为CrossEntropyLosser(prediction='output', target='label')即可\n",
" (3) 对于Metric是同理的\n",
" Metric计算也是从 forward的结果中取值 与 设置target的field中取值。 也是可以通过映射找到对应的值 \n",
" \n",
" \n",
"\n",
"## 一些问题.\n",
"### 1. DataSet中为什么需要设置input和target\n",
" 只有被设置为input或者target的数据才会在train的过程中被取出来\n",
" (1.1) 我们只会在设置为input的field中寻找传递给Model.forward的参数。\n",
" (1.2) 我们在传递值给losser或者metric的时候会使用来自: \n",
" (a)Model.forward的output\n",
" (b)被设置为target的field\n",
" \n",
"\n",
"### 2. 我们是通过forwad中的形参名将DataSet中的field赋值给对应的参数\n",
" (1.1) 构建模型过程中,\n",
" 例如:\n",
" DataSet中x,seq_lens是input,那么forward就应该是\n",
" def forward(self, x, seq_lens):\n",
" pass\n",
" 我们是通过形参名称进行匹配的field的\n",
" \n",
"\n",
"\n",
"### 1. 加载数据到DataSet\n",
"### 2. 使用apply操作对DataSet进行预处理\n",
" (2.1) 处理过程中将某些field设置为input,某些field设置为target\n",
"### 3. 构建模型\n",
" (3.1) 构建模型过程中,需要注意forward函数的形参名需要和DataSet中设置为input的field名称是一致的。\n",
" 例如:\n",
" DataSet中x,seq_lens是input,那么forward就应该是\n",
" def forward(self, x, seq_lens):\n",
" pass\n",
" 我们是通过形参名称进行匹配的field的\n",
" (3.2) 模型的forward的output需要是dict类型的。\n",
" 建议将输出设置为{\"pred\": xx}.\n",
" \n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

+ 0
- 97
tutorials/fastnlp_test_tutorial.ipynb

@@ -1,97 +0,0 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## fastNLP测试说明\n",
"### 测试环境\n",
"fastNLP使用pytest对代码进行单元测试,测试代码在test文件夹下,测试所需数据在test/data_for_tests文件夹下\n",
"测试的步骤主要分为准备数据,执行测试,比对结果,清除环境四步\n",
"测试代码以test_xxx.py命名,以DataSet的测试代码为例,测试代码文件名为test_dataset.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import unittest # 单元测试需要用到unittest\n",
"\n",
"from fastNLP.core.dataset import DataSet\n",
"from fastNLP.core.fieldarray import FieldArray\n",
"from fastNLP.core.instance import Instance\n",
"# 在这个单元测试文件中,需要测试DataSet、FieldArray、以及Instance"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"class TestDataSet(unittest.TestCase): # 类名字以Test打头,继承unittest.TestCase\n",
"\n",
" def test_init_v1(self): # 测试样例1, 函数名称以test_打头\n",
" # 该测试样例测试的是DataSet的初始化\n",
" ins = Instance(x=[1, 2, 3, 4], y=[5, 6]) # 准备数据\n",
" ds = DataSet([ins] * 40) # 执行测试(调用DataSet的初始化函数)\n",
" self.assertTrue(\"x\" in ds.field_arrays and \"y\" in ds.field_arrays) # 比对结果:'x'跟'y'都是ds的field\n",
" self.assertEqual(ds.field_arrays[\"x\"].content, [[1, 2, 3, 4], ] * 40) # 比对结果: field 'x'的内容正确\n",
" self.assertEqual(ds.field_arrays[\"y\"].content, [[5, 6], ] * 40) # 比对结果: field 'y'的内容正确\n",
" \n",
" def test_init_v2(self): # 测试样例2,该样例测试DataSet的另一种初始化方式\n",
" ds = DataSet({\"x\": [[1, 2, 3, 4]] * 40, \"y\": [[5, 6]] * 40})\n",
" self.assertTrue(\"x\" in ds.field_arrays and \"y\" in ds.field_arrays)\n",
" self.assertEqual(ds.field_arrays[\"x\"].content, [[1, 2, 3, 4], ] * 40)\n",
" self.assertEqual(ds.field_arrays[\"y\"].content, [[5, 6], ] * 40)\n",
" \n",
" def test_init_assert(self): # 测试样例3,该样例测试不规范初始化DataSet时是否会报正确错误\n",
" with self.assertRaises(AssertionError):\n",
" _ = DataSet({\"x\": [[1, 2, 3, 4]] * 40, \"y\": [[5, 6]] * 100})\n",
" with self.assertRaises(AssertionError):\n",
" _ = DataSet([[1, 2, 3, 4]] * 10)\n",
" with self.assertRaises(ValueError):\n",
" _ = DataSet(0.00001)\n",
" \n",
" def test_contains(self): # 测试样例4,该样例测试DataSet的contains函数,是功能测试\n",
" ds = DataSet({\"x\": [[1, 2, 3, 4]] * 40, \"y\": [[5, 6]] * 40})\n",
" self.assertTrue(\"x\" in ds)\n",
" self.assertTrue(\"y\" in ds)\n",
" self.assertFalse(\"z\" in ds)\n",
" \n",
" # 更多测试样例见test/core/test_dataset.py"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
}
},
"nbformat": 4,
"nbformat_minor": 2
}

tutorials/tutorial_one.ipynb → tutorials/tutorial_1.ipynb


+ 0
- 283
tutorials/tutorial_for_developer.md

@@ -1,283 +0,0 @@
# fastNLP Developer Guide
#### This tutorial covers the following classes:
- DataSet
- Sampler
- Batch
- Model
- Loss
- Metric
- Trainer
- Tester

#### DataSet: holds the data.
1. Each element in a DataSet may only be one of three types: `np.float64`, `np.int64`, `np.str`. Incoming `int` data is converted to `np.int64`, and `float` to `np.float64`.
2. A DataSet can mark fields as input or target. Fields marked as input are passed to Model.forward, and the passing is done by key matching (see the sketch below). For example, suppose 'x1', 'x2', 'x3' in the DataSet are set as input, and
	- the function is Model.forward(self, x1, x3): then 'x1' and 'x3' from the DataSet are passed to forward, and the extra 'x2' is ignored
	- the function is Model.forward(self, x1, x4): it needs an 'x4', but there is no such input field in the DataSet, so an error is raised.
	- the function is Model.forward(self, x1, **kwargs): all of 'x1', 'x2', 'x3' are passed in. But Model.forward(self, x4, **kwargs) raises an error, because there is no 'x4'.
3. For fields set as target, we recommend the name 'target' (if there is only one value to predict), but this is not enforced; why it need not be enforced is explained later.
DataSet should not require further development; if it does not cover your scenario, please raise it in the developer group or file an issue on GitHub.
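
A minimal sketch of the key-matching rule in item 2, using the fastNLP 0.4 API that appears elsewhere in this diff (the field names and the model are made up for illustration):

```python
import torch.nn as nn
from fastNLP import DataSet, Instance

ds = DataSet()
ds.append(Instance(x1=[1, 2], x2=[3, 4], x3=[5, 6], target=0))
ds.set_input('x1', 'x2', 'x3')  # all three become input fields
ds.set_target('target')

class Model(nn.Module):
    def forward(self, x1, x3):
        # matched by parameter name: receives 'x1' and 'x3'; the extra input field 'x2' is ignored
        return {'pred': x1}
```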

#### Sampler: given a DataSet, returns a list of indices; Batch yields data in that order.
A Sampler must inherit from fastNLP.core.sampler.BaseSampler
```python
class BaseSampler(object):
    """The base class of all samplers.

    Sub-classes must implement the __call__ method.
    __call__ takes a DataSet object and returns a list of int - the sampling indices.
    """
    def __call__(self, *args, **kwargs):
        raise NotImplementedError

# Subclasses must override __call__. The function may have only one required parameter,
# which must be a DataSet, otherwise the Trainer cannot call it
class SonSampler(BaseSampler):
    def __init__(self, xxx):
        # implementing __init__ is optional
        pass
    def __call__(self, data_set):
        pass
```
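
For example, a hypothetical sampler that iterates over the DataSet in reverse order could be sketched as:

```python
class ReverseSampler(BaseSampler):
    def __call__(self, data_set):
        # indices from the last instance down to the first
        return list(range(len(data_set) - 1, -1, -1))
```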

#### Batch: takes the fields set as input and target out of the DataSet to form batch_x and batch_y
Where appropriate (mainly depending on whether the data type can be converted to a Tensor), the data is converted to PyTorch Tensors. The order in which samples are drawn is decided by the Sampler.
A Sampler takes a DataSet and returns an index list as long as the DataSet; Batch draws batch_size samples at a time (the last batch may contain fewer than batch_size). Iteration is shown in the sketch below.
Examples:
1. SequentialSampler samples sequentially

Suppose the DataSet has length 100; SequentialSampler returns the index list [0, 1, ..., 98, 99]. With batch_size set to 4, the first batch gets instances [0, 1, 2, 3], the second batch gets [4, 5, 6, 7], and so on until all samples are used.
2. RandomSampler samples randomly

Suppose the DataSet has length 100; RandomSampler might return an index list like [0, 99, 20, 5, 3, 1, ...], from which samples are drawn batch_size at a time.

Batch should not require subclassing or further development; if you have special needs, please raise them in the developer group.
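
Putting Sampler and Batch together, iteration looks like this (a sketch following the usage in the tutorials above; SequentialSampler is assumed to live alongside RandomSampler in fastNLP.core.sampler):

```python
from fastNLP.core.batch import Batch
from fastNLP.core.sampler import SequentialSampler

# dataset is a DataSet whose input/target fields have already been set
batch_iterator = Batch(dataset=dataset, batch_size=4, sampler=SequentialSampler())
for batch_x, batch_y in batch_iterator:
    pass  # batch_x holds the input fields, batch_y the target fields, both as dicts
```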

#### Model: the user-defined model
It must be a subclass of nn.Module
1. It must implement a forward method, and forward must not use *args-style parameters. For example
```python
def forward(self, word_seq, *args):  # this is not allowed
    # ...
    pass
```
The return value must be a dict
```python
def forward(self, word_seq, seq_lens):
    xxx = "xxx"
    return {'pred': xxx}  # the return value must be a dict; 'pred' is the recommended key for predictions, but this is not enforced, and the number of output items is unrestricted
```
2. If a predict method is implemented, evaluation calls predict instead of forward; without a predict method, evaluation calls forward. predict must not use *args-style parameters either, and it must also return a dict, again with 'pred' as the recommended key.
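
A minimal conforming model, sketched under these constraints (the architecture is illustrative only):

```python
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self, vocab_size, num_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 50)
        self.fc = nn.Linear(50, num_classes)

    def forward(self, word_seq, seq_lens):
        x = self.embed(word_seq).mean(dim=1)   # average the word embeddings
        return {'pred': self.fc(x)}            # forward must return a dict

    def predict(self, word_seq, seq_lens):
        output = self(word_seq, seq_lens)      # called instead of forward during evaluation
        return {'pred': output['pred'].argmax(dim=1)}
```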

#### Loss: computes the loss from the prediction returned by model.forward() (a dict) and batch_y
1. First, the "key mapping". As seen in the DataSet and Model sections, fastNLP restricts neither the return value of Model.forward() nor the keys of the target fields in the DataSet. So how does the loss computation know where to take its values from?
Take CrossEntropyLoss as an example. Computing cross entropy generally needs two values, prediction and target. CrossEntropyLoss can be initialized with two parameters (pred=None, target=None), both of type str. Given (pred='output', target='label'), CrossEntropyLoss uses the key 'output' to look up a value in forward's output and in batch_y, and likewise uses 'label'. Note that pred and target need not come from model.forward and batch_y respectively; both may come from the forward output alone.
2. How to create your own loss
	- Use fastNLP.LossInForward and include a key named loss in the result of model.forward().
	- When the trainer uses a loss (say loss=CrossEntropyLoss()), what actually runs is
	los = loss(prediction, batch_y), i.e. a direct call to `loss.__call__()`. CrossEntropyLoss does not implement `__call__` itself; `__call__` is implemented in LossBase. Every loss must inherit from fastNLP.core.loss.LossBase; LossBase's methods are outlined in the next section.
3. Avoid overriding the `__call__()` and `_init_param_map()` methods.

```python
class LossBase():
    def __init__(self):
        self.param_map = {}  # usually no need to build this yourself; calling _init_param_map() is better
        self._checked = False  # this attribute can be ignored

    def _init_param_map(self, key_map=None, **kwargs):
        # This function registers the loss's "key mapping". Values can be passed in two ways.
        # The first is a dict via key_map, whose values are the keys used to look up forward's output and batch_y:
        #   key_map = {'pred': 'output', 'target': 'label'}
        # The second is to write it yourself:
        #   _init_param_map(pred='output', target='label')
        # Why provide this method? Calling it registers param_map automatically and performs checks, preventing
        # keys that are not actually parameters of get_loss. Note that only parameters that need key mapping
        # should be passed in; do not pass other loss parameters. If (pred=None, target=None) is passed,
        # __call__() looks up the keys 'pred' and 'target' in pred_dict and target_dict.
        # Calling this method is optional.

    def __call__(self, pred_dict, target_dict, check=False):  # ignore check=False; it will probably be removed
        # This function mainly performs checks, e.g. whether pred_dict and target_dict contain the keys
        # required to compute the loss. If the checks pass, it calls the get_loss method.
        fast_param = self._fast_param_map(pred_dict, target_dict)
        if fast_param:
            return self.get_loss(**fast_param)
        # if there is no fast_param, match the parameters and then call get_loss
        xxxx
        return loss  # return the loss as a Tensor

    def _fast_param_map(self, pred_dict, target_dict):
        # A fast path for computing the loss: in many cases no "key mapping" is needed at all. For example,
        # if pred_dict has only one element and target_dict has only one element, the prediction and the target
        # can be used unambiguously to compute the loss; the base class detects this case (and possibly other
        # unambiguous ones). If _fast_param_map succeeds, no key mapping is used, so the loss can be computed
        # even when the "key mapping" is missing or wrong.
        # The return value is a dict; on success it should look like {'pred': value, 'target': value};
        # an empty dict means matching failed and __call__ continues.

    def get_loss(self, *args, **kwargs):
        # This must be implemented; it is where the loss is computed.
        # (1) get_loss must not use *args-style parameters.
        # (2) If it uses **kwargs, all parameters in pred_dict and target_dict are passed in; this is discouraged.
        raise NotImplementedError

# L1Loss as an example
class L1Loss(LossBase):  # inherit from LossBase
    # register the values to map; the mapped names 'pred' and 'target' must correspond to get_loss's parameter names
    def __init__(self, pred=None, target=None):
        super(L1Loss, self).__init__()
        # Pass pred and target to _init_param_map so they are registered correctly. This step is optional but
        # recommended. Only key-value pairs used for the "key mapping" should be passed: if, say,
        # __init__(pred=None, target=None, threshold=0.1) had a threshold controlling the loss computation,
        # threshold should not be passed to _init_param_map.
        self._init_param_map(pred=pred, target=target)

    def get_loss(self, pred, target):
        # 'pred' and 'target' here must be consistent with the mapping registered at initialization.
        return F.l1_loss(input=pred, target=target)  # just return a loss
```
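
Used in a Trainer, the key mapping is supplied when the loss is constructed, e.g. (the key names follow the examples in this tutorial):

```python
loss = L1Loss(pred='output', target='label')
```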

#### Metric: computes the metric from the result of Model.forward() or Model.predict()
Metric is designed much like loss: both receive pred_dict and target_dict for computation. But a metric's pred_dict may come from the return value of Model.forward, or of Model.predict (predict is called if the Model has one); below, pred_dict refers to either.
1. The "key mapping" here works like the loss's "key mapping". For example, with Metric(pred='output', target='label'), the key 'output' is used to find pred in pred_dict and target_dict, and 'label' to find target.
2. How to create your own Metric
Metric computation differs from loss computation in that it has two steps.
	- **Every batch's output** triggers a call to the Metric's ``__call__(pred_dict, target_dict)``, which in turn calls the evaluate() method (which must be implemented).
	- After all batches have been passed in, the Metric's get_metric() method is called to obtain the final metric value.
	- So in evaluate, the Metric updates its own state from the data it receives (pred_dict and batch_y), e.g. accumulating the number of correct predictions and the total sample count; get_metric() then produces the final result.
All Metrics must inherit from fastNLP.core.metrics.MetricBase. See the example in the next cell
3. Avoid overriding the ``__call__()`` and ``_init_param_map()`` methods.

```python
class MetricBase:
    def __init__(self):
        self.param_map = {}  # usually no need to build this yourself; calling _init_param_map() is better
        self._checked = False  # this attribute can be ignored

    def _init_param_map(self, key_map=None, **kwargs):
        # This function registers the Metric's "key mapping". Values can be passed in two ways.
        # The first is a dict via key_map, whose values are the keys used to look up forward's output and batch_y:
        #   key_map = {'pred': 'output', 'target': 'label'}
        # The second is to write it yourself (recommended):
        #   _init_param_map(pred='output', target='label')
        # Why provide this method? Calling it registers param_map automatically and performs checks, preventing
        # keys that are not actually parameters of evaluate(). Note that only parameters that need key mapping
        # should be passed in; do not pass other evaluate parameters. If (pred=None, target=None) is passed,
        # __call__() looks up the keys 'pred' and 'target' in pred_dict and target_dict.
        # Calling this method is optional.
        pass

    def __call__(self, pred_dict, target_dict, check=False):  # ignore check=False; it will probably be removed
        # This function mainly performs checks, e.g. whether pred_dict and target_dict contain the keys
        # required by evaluate. If the checks pass, it calls the evaluate method.
        fast_param = self._fast_param_map(pred_dict, target_dict)
        if fast_param:
            return self.evaluate(**fast_param)
        # if there is no fast_param, match the parameters and then call evaluate
        # xxxx

    def _fast_param_map(self, pred_dict, target_dict):
        # A fast path: in many cases no "key mapping" is needed. For example, if pred_dict has only one element
        # and target_dict has only one element, the prediction and the target can be used unambiguously to
        # compute the metric; the base class detects this case (and possibly other unambiguous ones). If
        # _fast_param_map succeeds, no key mapping is used, so the metric can be computed even when the
        # "key mapping" is missing or wrong.
        # The return value is a dict; on success it should look like {'pred': value, 'target': value};
        # an empty dict means matching failed and __call__ keeps trying to match.
        pass

    def evaluate(self, *args, **kwargs):
        # This must be implemented; it accumulates the metric state.
        # (1) evaluate() must not use *args-style parameters.
        # (2) If it uses **kwargs, all parameters in pred_dict and target_dict are passed in; this is discouraged.
        raise NotImplementedError

    def get_metric(self, reset=True):
        # This must be implemented; it returns the final metric as a dict.
        # It is called after all batches have been passed in.
        raise NotImplementedError

# AccuracyMetric as an example
class AccuracyMetric(MetricBase):  # inherit from MetricBase
    # register the values to map; the mapped names 'pred' and 'target' must correspond to evaluate()'s parameter names
    def __init__(self, pred=None, target=None):
        super(AccuracyMetric, self).__init__()
        # Pass pred and target to _init_param_map so they are registered correctly. This step is optional but
        # recommended. Only key-value pairs used for the "key mapping" should be passed: as with losses, a
        # parameter such as threshold that merely controls the computation should not be passed to _init_param_map.
        self._init_param_map(pred=pred, target=target)

        self.total = 0  # accumulates the total number of samples
        self.corr = 0   # accumulates the number of correct samples

    def evaluate(self, pred, target):
        # basic checks or preprocessing of pred and target
        if pred.size() == target.size() and len(pred.size()) == 1:  # pred has already been argmax-ed
            pass
        elif len(pred.size()) == 2 and len(target.size()) == 1:  # pred has not been argmax-ed yet
            pred = pred.argmax(dim=1)
        else:
            raise ValueError("The shape of pred and target should be ((B, n_classes), (B, )) or ("
                             "(B,),(B,)).")
        assert pred.size(0) == target.size(0), "Mismatch batch size."
        # accumulate
        self.total += pred.size(0)
        self.corr += torch.sum(torch.eq(pred, target).float()).item()

    def get_metric(self, reset=True):
        # reset indicates whether to clear the accumulated state; defaults to True
        # this function must return a dict and may contain multiple metrics
        metric = {}
        metric['acc'] = self.corr / self.total
        if reset:
            self.total = 0
            self.corr = 0
        return metric
```
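
Construction mirrors the loss; with the key names used in the 10-minute tutorial above:

```python
metric = AccuracyMetric(pred='predict', target='label_seq')
```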

#### Tester: used for evaluation; should not need modification
Its important initialization parameters are data, model, and metrics; the important function is test().

What happens inside test():
```
predict_func = model.predict if it exists, otherwise model.forward
for batch_x, batch_y in batch:
    # (1) move the data to the model's device
    # (2) take the data predict_func needs from batch_x, pass it to predict_func, and obtain pred_dict
    # (3) call metric(pred_dict, batch_y)
    # (4) once all batches have run, call the metric's get_metric method; its return value is the evaluation result
metric.get_metric()
```
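
In code, this matches the usage from the 10-minute tutorial above:

```python
from fastNLP import Tester

tester = Tester(data=test_data, model=model,
                metrics=AccuracyMetric(pred='predict', target='label_seq'),
                batch_size=4)
eval_results = tester.test()
```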

#### Trainer: a wrapper around the training process.
Its most important function is train()

What happens inside train():
```
(1) create the batches
    batch = Batch(dataset, batch_size, sampler=sampler)
    for batch_x, batch_y in batch:
        # ...
    batch_x and batch_y are both dicts: batch_x holds the DataSet fields set as input, batch_y those set as target.
    The keys of both dicts are the DataSet keys, and the values are tensors, padded as appropriate.
(2) move the tensors in batch_x and batch_y to the model's device
(3) according to model.forward's parameter list, take the data forward needs from batch_x
(4) take model.forward's output pred_dict and pass it, together with batch_y, to the loss function to obtain the loss
(5) backpropagate the loss and update the parameters
(6) if there is a validation set, run validation
    tester = Tester(model, dev_data, metric)
    eval_results = tester.test()
(7) if eval_results is the best result so far, save the model.
```
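
In code (again following the 10-minute tutorial above):

```python
from fastNLP import Trainer
from fastNLP.core.losses import CrossEntropyLoss
from fastNLP.core.metrics import AccuracyMetric

trainer = Trainer(model=model, train_data=train_data, dev_data=test_data,
                  loss=CrossEntropyLoss(pred='output', target='label_seq'),
                  metrics=AccuracyMetric(pred='predict', target='label_seq'),
                  batch_size=32, n_epochs=5)
trainer.train()
```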

#### Miscellaneous
Trainer also provides a "dry run" feature, controlled by check_code_level; if check_code_level is -1, no dry run is performed.

check_code_level=0, 1, 2 are different warning levels.
Currently the levels govern how fields that are set as input or target in the DataSet but never used are reported:
0 ignores them (default); 1 warns when an unused field occurs; 2 raises an error and aborts on unused fields

The dry run has two main purposes:
- to catch errors before training rather than during the evaluation afterwards, when the whole training run would already be wasted
- because of the "key mapping", errors from a direct run can be hard to debug; errors raised during the dry run come with debugging hints

The dry run performs the following:
- with a very small batch_size, it checks whether batch_x contains the parameters Model.forward needs; only two iterations are run.
- it feeds Model.forward's output pred_dict and batch_y to the loss and attempts a backward pass; parameters are not updated, and gradients are zeroed afterwards.
If dev_data was passed, the metric is tested as well
- it creates a Tester, passes in a small amount of data, and checks that it runs correctly

The dry run is executed when the Trainer is initialized.

Normally there should be no need to modify the dry-run code, but if you hit a bug or have a good suggestion, feel free to raise it in the developer group or file an issue on GitHub.
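
To skip the dry run entirely, pass check_code_level=-1 when constructing the Trainer (a sketch; the other arguments are as in the training example above):

```python
trainer = Trainer(model=model, train_data=train_data, dev_data=test_data,
                  loss=loss, metrics=metric,
                  check_code_level=-1)  # -1 disables the pre-run checks
```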


