 Dev0.4.0 (#149)
* 1. Add support for BMESO-type tags in CRF (sketch below); 2. Add comments to vocabulary
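For readers unfamiliar with the BMESO scheme, the sketch below shows what the added tag constraints amount to: B/M tags may only be followed by M/E tags of the same label, while E/S/O may be followed by B/S/O. This is a self-contained illustration written for this note, not fastNLP's actual CRF API (which presumably derives such a constraint table from the tag vocabulary).

    def bmeso_transition_allowed(from_tag: str, to_tag: str) -> bool:
        """Return True if from_tag -> to_tag is a legal BMESO transition.
        Tags look like 'B-PER', 'M-PER', 'E-PER', 'S-LOC', or 'O'.
        Illustrative only; not fastNLP's actual API."""
        f_prefix, _, f_label = from_tag.partition('-')
        t_prefix, _, t_label = to_tag.partition('-')
        if f_prefix in ('B', 'M'):
            # inside an entity: may only continue or end it with the same label
            return t_prefix in ('M', 'E') and f_label == t_label
        if f_prefix in ('E', 'S', 'O'):
            # outside or at the end of an entity: may start a new one or stay outside
            return t_prefix in ('B', 'S', 'O')
        return False

    assert bmeso_transition_allowed('B-PER', 'M-PER')
    assert not bmeso_transition_allowed('B-PER', 'E-LOC')
    assert bmeso_transition_allowed('S-LOC', 'O')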
* Add an error check to BucketSampler
* 1. Fix a bug in ClipGradientCallback; remove the print in LRSchedulerCallback (a pbar should be passed in for printing later); 2. Add comments to MLP
* update MLP module
* Add comments to metric; fix a bug in the trainer save process
* Update README.md
fix tutorial link
* Add ENAS (Efficient Neural Architecture Search)
* add ignore_type in DataSet.add_field
* * AutoPadder will not pad when dtype is None
* add ignore_type in DataSet.apply
* Fix a potential padder bug in fieldarray
* Fix a typo in crf, as well as a spot that could cause numerical instability
* Fix a possible bug in CRF
* change two default init arguments of Trainer into None
* Changes to Callbacks:
* Add several read-only attributes to callback
* Set these attributes via the manager
* Optimize the code to lighten the load of @transfer
* * Move the ENAS-related code into the automl directory
* Fix a bug in fast_param_mapping
* Trainer now creates the save directory automatically
* Printing a Vocabulary now shows its contents
* * Add an iteration method to vocabulary
* Fix a bug where the CRF value could be negative
* add SQuAD metric
* add sigmoid activate function in MLP
* - add star transformer model
- add ConllLoader, for all kinds of conll-format files
- add JsonLoader, for json-format files
- add SSTLoader, for SST-2 & SST-5
- change Callback interface
- fix batch multi-process when killed
- add README to list models and their performance
* - fix test
* - fix callback & tests
* - update README
* Fix some bugs; adjust callback
* Prepare the 0.4.0 release
* update readme
* support parallel loss
* Prevent multi-GPU setups from breaking the loss computation
* update advance_tutorial jupyter notebook
* 1. Add new reading functions load_with_vocab() and load_without_vocab() to embedding_loader; the main changes from the previous function are that (1) embed_dim no longer has to be passed in and (2) whether the file is in word2vec or glove format is detected automatically.
2. Add from_dataset() and index_dataset() to Vocabulary, so indexing a dataset no longer takes multiple lines (see the sketch below).
3. Add a cache_result() decorator to utils for caching a function's return value.
4. Add an update_every attribute to callback
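A rough usage sketch of the Vocabulary helpers mentioned in item 2; the field names and keyword arguments below are assumptions for illustration, not a verbatim copy of the fastNLP signatures.

    from fastNLP import DataSet, Vocabulary

    # hypothetical field name 'words' for illustration
    train_ds = DataSet({'words': [['this', 'is', 'fine'], ['another', 'sentence']]})

    vocab = Vocabulary()
    vocab.from_dataset(train_ds, field_name='words')      # build the vocab in one call
    vocab.index_dataset(train_ds, field_name='words',
                        new_field_name='words')            # index in place, no manual apply() loop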
* 1. DataSet.apply() now reports the index at which an error occurs
2. Vocabulary.from_dataset() and index_dataset() report the vocab order when raising errors
3. embedloader skips malformed lines while reading embeddings.
* update attention
* doc tools
* fix some doc errors
* Switch to Chinese comments; add a viterbi decoding method
* Example version
* - add pad sequence for lstm
- add csv, conll, json filereader
- update dataloader
- remove useless dataloader
- fix trainer loss print
- fix tests
* - fix test_tutorial
* Add comments
* Test documentation
* Local stash
* Local stash
* Change the order of the documentation
* - add document
* Local stash
* update pooling
* update bert
* update documents in MLP
* update documents in snli
* combine self attention module to attention.py
* update documents on losses.py
* Update the DataSet documentation
* update documents on metrics
* 1. Remove the print output in LSTM; 2. Change use_cuda in Trainer and Tester to device; 3. Expand the Trainer documentation
* Add comments to Trainer
* Improve the documentation of trainer, callback, etc.; rename some code so that it is hidden from the documentation
* update char level encoder
* update documents on embedding.py
* - update doc
* Add comments and modify some code
* - update doc
- add get_embeddings
* Revise the documentation configuration
* Change embedding to be initialized via init_embed
* 1. Add multi-GPU support to Trainer and Tester;
* - add test
- fix jsonloader
* Remove the commented-out tutorial
* Add get_field_names to dataset
* Fix bugs
* - add Const
- fix bugs
* Revise some comments
* - add model runner for easier test models
- add model tests
* Revise the docs configuration and structure
* Revise a large part of the core documentation. TODO:
1. Improve the trainer and tester documentation
2. Investigate docstring examples and tests
* Review of the comments in the core module is mostly complete
* Revise comments in the io module
* Change all references to use relative paths
* Change all references to use relative paths
* small change
* 1. Remove api/automl from the installation files
2. Fix a seq_len bug in metric
3. Fix a naming error in sampler
* Fix a bug: compatibility with the CPU-only build of PyTorch
TODO: similar bugs may exist elsewhere
* Revise the cross-references in the documentation
* Replace tqdm.autonotebook with tqdm.auto
* - fix batch & vocab
* Upload the *.rst documentation files
* Upload documentation files and several TODOs
* Discuss and consolidate several modules
* Tests for the core module and some small fixes
* Remove some redundant documentation
* update init files
* update const files
* update const files
* Add tests for cnn
* fix a little bug
* - update attention
- fix tests
* Improve tests
* Finish the quick-start tutorial
* Update the documentation for the rename of sequence_modeling to sequence_labeling
* Re-run apidoc to clean up leftovers from the rename
* Fix documentation formatting
* Unify the scattered seq_len_to_mask implementations into core.utils.seq_len_to_mask (see the sketch below)
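For reference, the unified helper converts a batch of sequence lengths into a padding mask; the snippet below is a minimal PyTorch sketch of that behavior, not the fastNLP implementation itself.

    import torch

    def seq_len_to_mask_sketch(seq_len, max_len=None):
        # seq_len: LongTensor of shape (batch_size,); returns a (batch_size, max_len)
        # boolean mask where position j is True iff j < seq_len[i]
        max_len = int(seq_len.max()) if max_len is None else max_len
        positions = torch.arange(max_len, device=seq_len.device)   # (max_len,)
        return positions.unsqueeze(0) < seq_len.unsqueeze(1)       # broadcast to (B, max_len)

    mask = seq_len_to_mask_sketch(torch.tensor([3, 1, 2]))
    # [[True, True, True], [True, False, False], [True, True, False]]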
* Add a one-line hint
* Show dataset_loader in the documentation
* Note that Dataset.read_csv will be replaced by CSVLoader
* Finish the documentation covering the interaction between Callback and Trainer
* Partially update the index
* Remove redundant prints
* Remove the word-segmentation metric, since it might cause errors
* Revise the Chinese names in the documentation
* Finish the detailed introduction documents
* ipynb files for the tutorial
* Revise some introduction documents
* Revise the landing-page introductions of models and modules
* Add the titlesonly setting
* Revise the titles shown in the module documentation
* Revise the opening introductions of core and io
* Revise the opening introductions of modules and models
* Use .. todo:: to hide TODO comments that might otherwise be pulled into the documentation
* Revise some comments
* delete an old metric in test
* Revise the test files for tutorials
* Move features not yet ready for release into the legacy folder
* Remove tests that cannot run
* Revise the callback test file
* Remove outdated tutorials and test files
* Change the parameters of cache_results
* Revise the io test files; remove some outdated tests
* Fix bugs
* Fix the failing tests in test_utils.py
* Fix a compatibility issue with pad_sequence in PyTorch 1.1; revise Trainer's pbar
* 1. Fix a bug in metric; 2. Add metric tests
* add model summary
* Add aliases
* Remove nested layers in encoder
* Adjust the import order and the names exposed via __all__ in core
* Adjust the import order and the names exposed via __all__ in models
* Rename files
* Adjust __all__ and imports in the modules package
* fix var runn
* Add a clear method to vocab
* Minor tweaks for PEP8 compliance
* Update the cache_results example (see the sketch below)
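The idea behind cache_results is to store an expensive function's return value on disk so later runs reuse it; the decorator below is a simplified stand-in written for this note, and its name and single path argument are not the real fastNLP signature.

    import os
    import pickle
    from functools import wraps

    def cache_results_sketch(cache_path):
        """Simplified illustration of result caching; not fastNLP's cache_results."""
        def decorator(func):
            @wraps(func)
            def wrapper(*args, **kwargs):
                if os.path.exists(cache_path):
                    with open(cache_path, 'rb') as f:
                        return pickle.load(f)       # later runs: load the cached result
                result = func(*args, **kwargs)
                with open(cache_path, 'wb') as f:
                    pickle.dump(result, f)          # first run: compute and store
                return result
            return wrapper
        return decorator

    @cache_results_sketch('preprocessed_data.pkl')
    def build_data():
        # hypothetical expensive preprocessing
        return {'train': list(range(1000))}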
* 1. Warn when indices in callback may be None; 2. DataSet now supports indexing with a List
* Fix a typo
* Revise README.md
* update documents on bert
* update documents on encoder/bert
* Add a fitlog callback for fitlog experiment logging
* typo
* - update dataset_loader
* Add a link to the fitlog documentation.
* Add documentation for DataSet Loader
* - add star-transformer reproduction
- # Code Modified from https://github.com/carpedm20/ENAS-pytorch
- """A module with NAS controller-related code."""
- import collections
- import os
-
- import torch
- import torch.nn.functional as F
-
- import fastNLP.automl.enas_utils as utils
- from fastNLP.automl.enas_utils import Node
-
-
- def _construct_dags(prev_nodes, activations, func_names, num_blocks):
-     """Constructs a set of DAGs based on the actions, i.e., previous nodes and
-     activation functions, sampled from the controller/policy pi.
-
-     Args:
-         prev_nodes: Previous node actions from the policy.
-         activations: Activations sampled from the policy.
-         func_names: Mapping from activation function names to functions.
-         num_blocks: Number of blocks in the target RNN cell.
-
-     Returns:
-         A list of DAGs defined by the inputs.
-
-     RNN cell DAGs are represented in the following way:
-
-     1. Each element (node) in a DAG is a list of `Node`s.
-
-     2. The `Node`s in the list dag[i] correspond to the subsequent nodes
-        that take the output from node i as their own input.
-
-     3. dag[-1] is the node that takes input from x^{(t)} and h^{(t - 1)}.
-        dag[-1] always feeds dag[0].
-        dag[-1] acts as if `w_xc`, `w_hc`, `w_xh` and `w_hh` are its
-        weights.
-
-     4. dag[N - 1] is the node that produces the hidden state passed to
-        the next timestep. dag[N - 1] is also always a leaf node, and therefore
-        is always averaged with the other leaf nodes and fed to the output
-        decoder.
-     """
-     dags = []
-     for nodes, func_ids in zip(prev_nodes, activations):
-         dag = collections.defaultdict(list)
-
-         # add first node
-         dag[-1] = [Node(0, func_names[func_ids[0]])]
-         dag[-2] = [Node(0, func_names[func_ids[0]])]
-
-         # add following nodes
-         for jdx, (idx, func_id) in enumerate(zip(nodes, func_ids[1:])):
-             dag[utils.to_item(idx)].append(Node(jdx + 1, func_names[func_id]))
-
-         leaf_nodes = set(range(num_blocks)) - dag.keys()
-
-         # merge with avg
-         for idx in leaf_nodes:
-             dag[idx] = [Node(num_blocks, 'avg')]
-
-         # This is actually y^{(t)}. h^{(t)} is node N - 1 in
-         # the graph, where N is the number of nodes. I.e., h^{(t)} takes
-         # only one other node as its input.
-         # last h[t] node
-         last_node = Node(num_blocks + 1, 'h[t]')
-         dag[num_blocks] = [last_node]
-         dags.append(dag)
-
-     return dags
-
-
- class Controller(torch.nn.Module):
-     """Based on
-     https://github.com/pytorch/examples/blob/master/word_language_model/model.py
-
-     RL controllers do not necessarily have much to do with
-     language models.
-
-     Base the controller RNN on the GRU from:
-     https://github.com/ikostrikov/pytorch-a2c-ppo-acktr/blob/master/model.py
-     """
-     def __init__(self, num_blocks=4, controller_hid=100, cuda=False):
-         torch.nn.Module.__init__(self)
-
-         # `num_tokens` here is just the activation function
-         # for every even step,
-         self.shared_rnn_activations = ['tanh', 'ReLU', 'identity', 'sigmoid']
-         self.num_tokens = [len(self.shared_rnn_activations)]
-         self.controller_hid = controller_hid
-         self.use_cuda = cuda
-         self.num_blocks = num_blocks
-         for idx in range(num_blocks):
-             self.num_tokens += [idx + 1, len(self.shared_rnn_activations)]
-         self.func_names = self.shared_rnn_activations
-
-         num_total_tokens = sum(self.num_tokens)
-
-         self.encoder = torch.nn.Embedding(num_total_tokens,
-                                           controller_hid)
-         self.lstm = torch.nn.LSTMCell(controller_hid, controller_hid)
-
-         # Perhaps these weights in the decoder should be
-         # shared? At least for the activation functions, which all have the
-         # same size.
-         self.decoders = []
-         for idx, size in enumerate(self.num_tokens):
-             decoder = torch.nn.Linear(controller_hid, size)
-             self.decoders.append(decoder)
-
-         self._decoders = torch.nn.ModuleList(self.decoders)
-
-         self.reset_parameters()
-         self.static_init_hidden = utils.keydefaultdict(self.init_hidden)
-
-         def _get_default_hidden(key):
-             return utils.get_variable(
-                 torch.zeros(key, self.controller_hid),
-                 self.use_cuda,
-                 requires_grad=False)
-
-         self.static_inputs = utils.keydefaultdict(_get_default_hidden)
-
-     def reset_parameters(self):
-         init_range = 0.1
-         for param in self.parameters():
-             param.data.uniform_(-init_range, init_range)
-         for decoder in self.decoders:
-             decoder.bias.data.fill_(0)
-
-     def forward(self,  # pylint:disable=arguments-differ
-                 inputs,
-                 hidden,
-                 block_idx,
-                 is_embed):
-         if not is_embed:
-             embed = self.encoder(inputs)
-         else:
-             embed = inputs
-
-         hx, cx = self.lstm(embed, hidden)
-         logits = self.decoders[block_idx](hx)
-
-         logits /= 5.0
-
-         # # exploration
-         # if self.args.mode == 'train':
-         #     logits = (2.5 * F.tanh(logits))
-
-         return logits, (hx, cx)
-
-     def sample(self, batch_size=1, with_details=False, save_dir=None):
-         """Samples a set of `args.num_blocks` many computational nodes from the
-         controller, where each node is made up of an activation function, and
-         each node except the last also includes a previous node.
-         """
-         if batch_size < 1:
-             raise Exception(f'Wrong batch_size: {batch_size} < 1')
-
-         # [B, L, H]
-         inputs = self.static_inputs[batch_size]
-         hidden = self.static_init_hidden[batch_size]
-
-         activations = []
-         entropies = []
-         log_probs = []
-         prev_nodes = []
-         # The RNN controller alternately outputs an activation,
-         # followed by a previous node, for each block except the last one,
-         # which only gets an activation function. The last node is the output
-         # node, and its previous node is the average of all leaf nodes.
-         for block_idx in range(2*(self.num_blocks - 1) + 1):
-             logits, hidden = self.forward(inputs,
-                                           hidden,
-                                           block_idx,
-                                           is_embed=(block_idx == 0))
-
-             probs = F.softmax(logits, dim=-1)
-             log_prob = F.log_softmax(logits, dim=-1)
-             # .mean() for entropy?
-             entropy = -(log_prob * probs).sum(1, keepdim=False)
-
-             action = probs.multinomial(num_samples=1).data
-             selected_log_prob = log_prob.gather(
-                 1, utils.get_variable(action, requires_grad=False))
-
-             # why the [:, 0] here? Should it be .squeeze(), or
-             # .view()? Same below with `action`.
-             entropies.append(entropy)
-             log_probs.append(selected_log_prob[:, 0])
-
-             # 0: function, 1: previous node
-             mode = block_idx % 2
-             inputs = utils.get_variable(
-                 action[:, 0] + sum(self.num_tokens[:mode]),
-                 requires_grad=False)
-
-             if mode == 0:
-                 activations.append(action[:, 0])
-             elif mode == 1:
-                 prev_nodes.append(action[:, 0])
-
-         prev_nodes = torch.stack(prev_nodes).transpose(0, 1)
-         activations = torch.stack(activations).transpose(0, 1)
-
-         dags = _construct_dags(prev_nodes,
-                                activations,
-                                self.func_names,
-                                self.num_blocks)
-
-         if save_dir is not None:
-             for idx, dag in enumerate(dags):
-                 utils.draw_network(dag,
-                                    os.path.join(save_dir, f'graph{idx}.png'))
-
-         if with_details:
-             return dags, torch.cat(log_probs), torch.cat(entropies)
-
-         return dags
-
-     def init_hidden(self, batch_size):
-         zeros = torch.zeros(batch_size, self.controller_hid)
-         return (utils.get_variable(zeros, self.use_cuda, requires_grad=False),
-                 utils.get_variable(zeros.clone(), self.use_cuda, requires_grad=False))