
[verify] add data source in readme

tags/v0.4.10
wyg 5 years ago
commit 216efb446f
2 changed files with 8 additions and 1 deletion
  1. +7 -0 reproduction/text_classification/README.md
  2. +1 -1 reproduction/text_classification/train_char_cnn.py

reproduction/text_classification/README.md (+7 -0)

@@ -11,6 +11,13 @@ LSTM+self_attention: paper link [A Structured Self-attentive Sentence Embedding]


AWD-LSTM: paper link [Regularizing and Optimizing LSTM Language Models](https://arxiv.org/pdf/1708.02182.pdf)


# Dataset sources
IMDB:http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
SST-2:https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8
SST:https://nlp.stanford.edu/sentiment/trainDevTestTrees_PTB.zip
yelp_full:https://drive.google.com/drive/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M
yelp_polarity:https://drive.google.com/drive/folders/0Bz8a_Dbh9Qhbfll6bVpmNUtUcFdjYmF2SEpmZUZUcVNiMUw1TWN6RDV3a0JHT3kxLVhVR2M
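For the directly downloadable archives above (IMDB, SST, SST-2), fetching and unpacking can be scripted. The helper below is a hypothetical sketch, not part of the repository; the `local_name` and `fetch_and_extract` names are illustrative, and the Google Drive links for yelp_full/yelp_polarity need manual download since Drive does not serve folders over plain HTTP.

```python
import os
import tarfile
import urllib.request

# IMDB archive URL from the dataset-source list above.
IMDB_URL = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"

def local_name(url: str) -> str:
    """Derive the archive filename from a download URL."""
    return url.rstrip("/").rsplit("/", 1)[-1]

def fetch_and_extract(url: str, dest_dir: str = ".") -> str:
    """Download the archive if it is not already present, then extract it."""
    archive = os.path.join(dest_dir, local_name(url))
    if not os.path.exists(archive):
        urllib.request.urlretrieve(url, archive)
    with tarfile.open(archive, "r:gz") as tf:
        tf.extractall(dest_dir)  # IMDB unpacks into an aclImdb/ directory
    return archive
```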

# Datasets and reproduction results summary


Results reproduced with fastNLP vs. results reported in the papers (the number before the / is the fastNLP implementation, the number after is the paper's reported result; - means the paper did not report a result on that dataset)


reproduction/text_classification/train_char_cnn.py (+1 -1)

@@ -203,7 +203,7 @@ callbacks.append(
 def train(model,datainfo,loss,metrics,optimizer,num_epochs=100):
     trainer = Trainer(datainfo.datasets['train'], model, optimizer=optimizer, loss=loss(target='target'),batch_size=ops.batch_size,
                       metrics=[metrics(target='target')], dev_data=datainfo.datasets['test'], device=[0,1,2], check_code_level=-1,
-                      n_epochs=num_epochs)
+                      n_epochs=num_epochs,callbacks=callbacks)
     print(trainer.train())
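The one-line change above fixes a silent bug: the script builds a `callbacks` list earlier (`callbacks.append(...)` in the hunk header) but never hands it to the `Trainer`, so the callbacks never fire. The toy class below is a minimal sketch of that failure mode, not fastNLP's actual `Trainer` API:

```python
# Toy Trainer (illustrative only) that invokes callbacks once per epoch.
class Trainer:
    def __init__(self, n_epochs=2, callbacks=None):
        self.n_epochs = n_epochs
        # If the caller forgets callbacks=..., the list silently stays empty.
        self.callbacks = callbacks or []

    def train(self):
        fired = []
        for epoch in range(self.n_epochs):
            for cb in self.callbacks:
                fired.append(cb(epoch))
        return fired

callbacks = [lambda epoch: epoch]

# Before the fix: the callbacks list exists but is never passed, so nothing runs.
Trainer(n_epochs=2).train()
# After the fix: callbacks=callbacks, so each callback fires every epoch.
Trainer(n_epochs=2, callbacks=callbacks).train()
```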





