Update

main · Fafa-DL · 3 years ago · commit bb7fc469a1
16 changed files with 1309 additions and 46807 deletions
  1. +0 -41024  05 Transformer/范例/HW03.ipynb
  2. +0 -5783   05 Transformer/范例/HW04.ipynb
  3. +0 -0      范例/Colab/Google_Colab_Tutorial.ipynb
  4. +0 -0      范例/Colab/Google_Colab_Tutorial.pdf
  5. +0 -0      范例/HW01/HW01.ipynb
  6. +0 -0      范例/HW01/HW01.pdf
  7. +0 -0      范例/HW02/HW02-1.ipynb
  8. +0 -0      范例/HW02/HW02-2.ipynb
  9. BIN        范例/HW02/HW02.pdf
  10. +536 -0   范例/HW03/HW03.ipynb
  11. BIN       范例/HW03/HW03.pdf
  12. +773 -0   范例/HW04/HW04.ipynb
  13. BIN       范例/HW04/HW04.pdf
  14. +0 -0     范例/Pytorch/Pytorch_Tutorial.ipynb
  15. +0 -0     范例/Pytorch/Pytorch_Tutorial_1.pdf
  16. +0 -0     范例/Pytorch/Pytorch_Tutorial_2.pdf

+0 -41024   05 Transformer/范例/HW03.ipynb
File diff suppressed because it is too large


+0 -5783   05 Transformer/范例/HW04.ipynb
File diff suppressed because it is too large


01 Introduction/范例/Colab/Google_Colab_Tutorial.ipynb → 范例/Colab/Google_Colab_Tutorial.ipynb


01 Introduction/范例/Colab/Google_Colab_Tutorial.pdf → 范例/Colab/Google_Colab_Tutorial.pdf


01 Introduction/范例/HW01/HW01.ipynb → 范例/HW01/HW01.ipynb


01 Introduction/范例/HW01/HW01.pdf → 范例/HW01/HW01.pdf


02 Deep Learning/范例/HW02/HW02-1.ipynb → 范例/HW02/HW02-1.ipynb


02 Deep Learning/范例/HW02/HW02-2.ipynb → 范例/HW02/HW02-2.ipynb


BIN   02 Deep Learning/范例/HW02/HW02.pdf → 范例/HW02/HW02.pdf


+536 -0   范例/HW03/HW03.ipynb

@@ -0,0 +1,536 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "HW03.ipynb",
"provenance": [],
"collapsed_sections": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "D_a2USyd4giE"
},
"source": [
"# **Homework 3 - Convolutional Neural Network**\n",
"\n",
"This is the example code of homework 3 of the machine learning course by Prof. Hung-yi Lee.\n",
"\n",
"In this homework, you are required to build a convolutional neural network for image classification, possibly with some advanced training tips.\n",
"\n",
"\n",
"There are three levels here:\n",
"\n",
"**Easy**: Build a simple convolutional neural network as the baseline. (2 pts)\n",
"\n",
"**Medium**: Design a better architecture or adopt different data augmentations to improve the performance. (2 pts)\n",
"\n",
"**Hard**: Utilize provided unlabeled data to obtain better results. (2 pts)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VHpJocsDr6iA"
},
"source": [
"## **About the Dataset**\n",
"\n",
"The dataset used here is food-11, a collection of food images in 11 classes.\n",
"\n",
"For the requirement in the homework, TAs slightly modified the data.\n",
"Please DO NOT access the original fully-labeled training data or testing labels.\n",
"\n",
"Also, the modified dataset is for this course only, and any further distribution or commercial use is forbidden."
]
},
{
"cell_type": "code",
"metadata": {
"id": "zhzdomRTOKoJ"
},
"source": [
"# Download the dataset\n",
"# You may choose where to download the data.\n",
"\n",
"# Google Drive\n",
"!gdown --id '1awF7pZ9Dz7X1jn1_QAiKN-_v56veCEKy' --output food-11.zip\n",
"\n",
"# Dropbox\n",
"# !wget https://www.dropbox.com/s/m9q6273jl3djall/food-11.zip -O food-11.zip\n",
"\n",
"# MEGA\n",
"# !sudo apt install megatools\n",
"# !megadl \"https://mega.nz/#!zt1TTIhK!ZuMbg5ZjGWzWX1I6nEUbfjMZgCmAgeqJlwDkqdIryfg\"\n",
"\n",
"# Unzip the dataset.\n",
"# This may take some time.\n",
"!unzip -q food-11.zip"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "BBVSCWWhp6uq"
},
"source": [
"## **Import Packages**\n",
"\n",
"First, we need to import packages that will be used later.\n",
"\n",
"In this homework, we highly rely on **torchvision**, a library of PyTorch."
]
},
{
"cell_type": "code",
"metadata": {
"id": "9sVrKci4PUFW"
},
"source": [
"# Import necessary packages.\n",
"import numpy as np\n",
"import torch\n",
"import torch.nn as nn\n",
"import torchvision.transforms as transforms\n",
"from PIL import Image\n",
"# \"ConcatDataset\" and \"Subset\" are possibly useful when doing semi-supervised learning.\n",
"from torch.utils.data import ConcatDataset, DataLoader, Subset\n",
"from torchvision.datasets import DatasetFolder\n",
"\n",
"# This is for the progress bar.\n",
"from tqdm.auto import tqdm"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "F0i9ZCPrOVN_"
},
"source": [
"## **Dataset, Data Loader, and Transforms**\n",
"\n",
"Torchvision provides lots of useful utilities for image preprocessing, data wrapping as well as data augmentation.\n",
"\n",
"Here, since our data are stored in folders by class labels, we can directly apply **torchvision.datasets.DatasetFolder** for wrapping data without much effort.\n",
"\n",
"Please refer to [PyTorch official website](https://pytorch.org/vision/stable/transforms.html) for details about different transforms."
]
},
{
"cell_type": "code",
"metadata": {
"id": "gKd2abixQghI"
},
"source": [
"# It is important to do data augmentation in training.\n",
"# However, not every augmentation is useful.\n",
"# Please think about what kind of augmentation is helpful for food recognition.\n",
"train_tfm = transforms.Compose([\n",
" # Resize the image into a fixed shape (height = width = 128)\n",
" transforms.Resize((128, 128)),\n",
" # You may add some transforms here.\n",
" # ToTensor() should be the last one of the transforms.\n",
" transforms.ToTensor(),\n",
"])\n",
"\n",
"# We don't need augmentations in testing and validation.\n",
"# All we need here is to resize the PIL image and transform it into Tensor.\n",
"test_tfm = transforms.Compose([\n",
" transforms.Resize((128, 128)),\n",
" transforms.ToTensor(),\n",
"])\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "qz6jeMnkQl0_"
},
"source": [
"# Batch size for training, validation, and testing.\n",
"# A greater batch size usually gives a more stable gradient.\n",
"# But the GPU memory is limited, so please adjust it carefully.\n",
"batch_size = 128\n",
"\n",
"# Construct datasets.\n",
"# The argument \"loader\" tells how torchvision reads the data.\n",
"train_set = DatasetFolder(\"food-11/training/labeled\", loader=lambda x: Image.open(x), extensions=\"jpg\", transform=train_tfm)\n",
"valid_set = DatasetFolder(\"food-11/validation\", loader=lambda x: Image.open(x), extensions=\"jpg\", transform=test_tfm)\n",
"unlabeled_set = DatasetFolder(\"food-11/training/unlabeled\", loader=lambda x: Image.open(x), extensions=\"jpg\", transform=train_tfm)\n",
"test_set = DatasetFolder(\"food-11/testing\", loader=lambda x: Image.open(x), extensions=\"jpg\", transform=test_tfm)\n",
"\n",
"# Construct data loaders.\n",
"train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True, num_workers=8, pin_memory=True)\n",
"valid_loader = DataLoader(valid_set, batch_size=batch_size, shuffle=True, num_workers=8, pin_memory=True)\n",
"test_loader = DataLoader(test_set, batch_size=batch_size, shuffle=False)"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "j9YhZo7POPYG"
},
"source": [
"## **Model**\n",
"\n",
"The basic model here is simply a stack of convolutional layers followed by some fully-connected layers.\n",
"\n",
"Since there are three channels for a color image (RGB), the input channels of the network must be three.\n",
"In each convolutional layer, typically the channels of inputs grow, while the height and width shrink (or remain unchanged, according to some hyperparameters like stride and padding).\n",
"\n",
"Before fed into fully-connected layers, the feature map must be flattened into a single one-dimensional vector (for each image).\n",
"These features are then transformed by the fully-connected layers, and finally, we obtain the \"logits\" for each class.\n",
"\n",
"### **WARNING -- You Must Know**\n",
"You are free to modify the model architecture here for further improvement.\n",
"However, if you want to use some well-known architectures such as ResNet50, please make sure **NOT** to load the pre-trained weights.\n",
"Using such pre-trained models is considered cheating and therefore you will be punished.\n",
"Similarly, it is your responsibility to make sure no pre-trained weights are used if you use **torch.hub** to load any modules.\n",
"\n",
"For example, if you use ResNet-18 as your model:\n",
"\n",
"model = torchvision.models.resnet18(pretrained=**False**) → This is fine.\n",
"\n",
"model = torchvision.models.resnet18(pretrained=**True**) → This is **NOT** allowed."
]
},
{
"cell_type": "code",
"metadata": {
"id": "Y1c-GwrMQqMl"
},
"source": [
"class Classifier(nn.Module):\n",
" def __init__(self):\n",
" super(Classifier, self).__init__()\n",
" # The arguments for commonly used modules:\n",
" # torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride, padding)\n",
" # torch.nn.MaxPool2d(kernel_size, stride, padding)\n",
"\n",
" # input image size: [3, 128, 128]\n",
" self.cnn_layers = nn.Sequential(\n",
" nn.Conv2d(3, 64, 3, 1, 1),\n",
" nn.BatchNorm2d(64),\n",
" nn.ReLU(),\n",
" nn.MaxPool2d(2, 2, 0),\n",
"\n",
" nn.Conv2d(64, 128, 3, 1, 1),\n",
" nn.BatchNorm2d(128),\n",
" nn.ReLU(),\n",
" nn.MaxPool2d(2, 2, 0),\n",
"\n",
" nn.Conv2d(128, 256, 3, 1, 1),\n",
" nn.BatchNorm2d(256),\n",
" nn.ReLU(),\n",
" nn.MaxPool2d(4, 4, 0),\n",
" )\n",
" self.fc_layers = nn.Sequential(\n",
" nn.Linear(256 * 8 * 8, 256),\n",
" nn.ReLU(),\n",
" nn.Linear(256, 256),\n",
" nn.ReLU(),\n",
" nn.Linear(256, 11)\n",
" )\n",
"\n",
" def forward(self, x):\n",
" # input (x): [batch_size, 3, 128, 128]\n",
" # output: [batch_size, 11]\n",
"\n",
" # Extract features by convolutional layers.\n",
" x = self.cnn_layers(x)\n",
"\n",
" # The extracted feature map must be flatten before going to fully-connected layers.\n",
" x = x.flatten(1)\n",
"\n",
" # The features are transformed by fully-connected layers to obtain the final logits.\n",
" x = self.fc_layers(x)\n",
" return x"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "aEnGbriXORN3"
},
"source": [
"## **Training**\n",
"\n",
"You can finish supervised learning by simply running the provided code without any modification.\n",
"\n",
"The function \"get_pseudo_labels\" is used for semi-supervised learning.\n",
"It is expected to get better performance if you use unlabeled data for semi-supervised learning.\n",
"However, you have to implement the function on your own and need to adjust several hyperparameters manually.\n",
"\n",
"For more details about semi-supervised learning, please refer to [Prof. Lee's slides](https://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2016/Lecture/semi%20(v3).pdf).\n",
"\n",
"Again, please notice that utilizing external data (or pre-trained model) for training is **prohibited**."
]
},
{
"cell_type": "code",
"metadata": {
"id": "swlf5EwA-hxA"
},
"source": [
"def get_pseudo_labels(dataset, model, threshold=0.65):\n",
" # This functions generates pseudo-labels of a dataset using given model.\n",
" # It returns an instance of DatasetFolder containing images whose prediction confidences exceed a given threshold.\n",
" # You are NOT allowed to use any models trained on external data for pseudo-labeling.\n",
" device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"\n",
" # Construct a data loader.\n",
" data_loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)\n",
"\n",
" # Make sure the model is in eval mode.\n",
" model.eval()\n",
" # Define softmax function.\n",
" softmax = nn.Softmax(dim=-1)\n",
"\n",
" # Iterate over the dataset by batches.\n",
" for batch in tqdm(data_loader):\n",
" img, _ = batch\n",
"\n",
" # Forward the data\n",
" # Using torch.no_grad() accelerates the forward process.\n",
" with torch.no_grad():\n",
" logits = model(img.to(device))\n",
"\n",
" # Obtain the probability distributions by applying softmax on logits.\n",
" probs = softmax(logits)\n",
"\n",
" # ---------- TODO ----------\n",
" # Filter the data and construct a new dataset.\n",
"\n",
" # # Turn off the eval mode.\n",
" model.train()\n",
" return dataset"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "PHaFE-8oQtkC"
},
"source": [
"# \"cuda\" only when GPUs are available.\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"\n",
"# Initialize a model, and put it on the device specified.\n",
"model = Classifier().to(device)\n",
"model.device = device\n",
"\n",
"# For the classification task, we use cross-entropy as the measurement of performance.\n",
"criterion = nn.CrossEntropyLoss()\n",
"\n",
"# Initialize optimizer, you may fine-tune some hyperparameters such as learning rate on your own.\n",
"optimizer = torch.optim.Adam(model.parameters(), lr=0.0003, weight_decay=1e-5)\n",
"\n",
"# The number of training epochs.\n",
"n_epochs = 80\n",
"\n",
"# Whether to do semi-supervised learning.\n",
"do_semi = False\n",
"\n",
"for epoch in range(n_epochs):\n",
" # ---------- TODO ----------\n",
" # In each epoch, relabel the unlabeled dataset for semi-supervised learning.\n",
" # Then you can combine the labeled dataset and pseudo-labeled dataset for the training.\n",
" if do_semi:\n",
" # Obtain pseudo-labels for unlabeled data using trained model.\n",
" pseudo_set = get_pseudo_labels(unlabeled_set, model)\n",
"\n",
" # Construct a new dataset and a data loader for training.\n",
" # This is used in semi-supervised learning only.\n",
" concat_dataset = ConcatDataset([train_set, pseudo_set])\n",
" train_loader = DataLoader(concat_dataset, batch_size=batch_size, shuffle=True, num_workers=8, pin_memory=True)\n",
"\n",
" # ---------- Training ----------\n",
" # Make sure the model is in train mode before training.\n",
" model.train()\n",
"\n",
" # These are used to record information in training.\n",
" train_loss = []\n",
" train_accs = []\n",
"\n",
" # Iterate the training set by batches.\n",
" for batch in tqdm(train_loader):\n",
"\n",
" # A batch consists of image data and corresponding labels.\n",
" imgs, labels = batch\n",
"\n",
" # Forward the data. (Make sure data and model are on the same device.)\n",
" logits = model(imgs.to(device))\n",
"\n",
" # Calculate the cross-entropy loss.\n",
" # We don't need to apply softmax before computing cross-entropy as it is done automatically.\n",
" loss = criterion(logits, labels.to(device))\n",
"\n",
" # Gradients stored in the parameters in the previous step should be cleared out first.\n",
" optimizer.zero_grad()\n",
"\n",
" # Compute the gradients for parameters.\n",
" loss.backward()\n",
"\n",
" # Clip the gradient norms for stable training.\n",
" grad_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=10)\n",
"\n",
" # Update the parameters with computed gradients.\n",
" optimizer.step()\n",
"\n",
" # Compute the accuracy for current batch.\n",
" acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean()\n",
"\n",
" # Record the loss and accuracy.\n",
" train_loss.append(loss.item())\n",
" train_accs.append(acc)\n",
"\n",
" # The average loss and accuracy of the training set is the average of the recorded values.\n",
" train_loss = sum(train_loss) / len(train_loss)\n",
" train_acc = sum(train_accs) / len(train_accs)\n",
"\n",
" # Print the information.\n",
" print(f\"[ Train | {epoch + 1:03d}/{n_epochs:03d} ] loss = {train_loss:.5f}, acc = {train_acc:.5f}\")\n",
"\n",
" # ---------- Validation ----------\n",
" # Make sure the model is in eval mode so that some modules like dropout are disabled and work normally.\n",
" model.eval()\n",
"\n",
" # These are used to record information in validation.\n",
" valid_loss = []\n",
" valid_accs = []\n",
"\n",
" # Iterate the validation set by batches.\n",
" for batch in tqdm(valid_loader):\n",
"\n",
" # A batch consists of image data and corresponding labels.\n",
" imgs, labels = batch\n",
"\n",
" # We don't need gradient in validation.\n",
" # Using torch.no_grad() accelerates the forward process.\n",
" with torch.no_grad():\n",
" logits = model(imgs.to(device))\n",
"\n",
" # We can still compute the loss (but not the gradient).\n",
" loss = criterion(logits, labels.to(device))\n",
"\n",
" # Compute the accuracy for current batch.\n",
" acc = (logits.argmax(dim=-1) == labels.to(device)).float().mean()\n",
"\n",
" # Record the loss and accuracy.\n",
" valid_loss.append(loss.item())\n",
" valid_accs.append(acc)\n",
"\n",
" # The average loss and accuracy for entire validation set is the average of the recorded values.\n",
" valid_loss = sum(valid_loss) / len(valid_loss)\n",
" valid_acc = sum(valid_accs) / len(valid_accs)\n",
"\n",
" # Print the information.\n",
" print(f\"[ Valid | {epoch + 1:03d}/{n_epochs:03d} ] loss = {valid_loss:.5f}, acc = {valid_acc:.5f}\")"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "2o1oCMXy61_3"
},
"source": [
"## **Testing**\n",
"\n",
"For inference, we need to make sure the model is in eval mode, and the order of the dataset should not be shuffled (\"shuffle=False\" in test_loader).\n",
"\n",
"Last but not least, don't forget to save the predictions into a single CSV file.\n",
"The format of CSV file should follow the rules mentioned in the slides.\n",
"\n",
"### **WARNING -- Keep in Mind**\n",
"\n",
"Cheating includes but not limited to:\n",
"1. using testing labels,\n",
"2. submitting results to previous Kaggle competitions,\n",
"3. sharing predictions with others,\n",
"4. copying codes from any creatures on Earth,\n",
"5. asking other people to do it for you.\n",
"\n",
"Any violations bring you punishments from getting a discount on the final grade to failing the course.\n",
"\n",
"It is your responsibility to check whether your code violates the rules.\n",
"When citing codes from the Internet, you should know what these codes exactly do.\n",
"You will **NOT** be tolerated if you break the rule and claim you don't know what these codes do.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "4HznI9_-ocrq"
},
"source": [
"# Make sure the model is in eval mode.\n",
"# Some modules like Dropout or BatchNorm affect if the model is in training mode.\n",
"model.eval()\n",
"\n",
"# Initialize a list to store the predictions.\n",
"predictions = []\n",
"\n",
"# Iterate the testing set by batches.\n",
"for batch in tqdm(test_loader):\n",
" # A batch consists of image data and corresponding labels.\n",
" # But here the variable \"labels\" is useless since we do not have the ground-truth.\n",
" # If printing out the labels, you will find that it is always 0.\n",
" # This is because the wrapper (DatasetFolder) returns images and labels for each batch,\n",
" # so we have to create fake labels to make it work normally.\n",
" imgs, labels = batch\n",
"\n",
" # We don't need gradient in testing, and we don't even have labels to compute loss.\n",
" # Using torch.no_grad() accelerates the forward process.\n",
" with torch.no_grad():\n",
" logits = model(imgs.to(device))\n",
"\n",
" # Take the class with greatest logit as prediction and record it.\n",
" predictions.extend(logits.argmax(dim=-1).cpu().numpy().tolist())"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "3t2q2Th85ZUE"
},
"source": [
"# Save predictions into the file.\n",
"with open(\"predict.csv\", \"w\") as f:\n",
"\n",
" # The first row must be \"Id, Category\"\n",
" f.write(\"Id,Category\\n\")\n",
"\n",
" # For the rest of the rows, each image id corresponds to a predicted class.\n",
" for i, pred in enumerate(predictions):\n",
" f.write(f\"{i},{pred}\\n\")"
],
"execution_count": null,
"outputs": []
}
]
}

BIN   05 Transformer/范例/HW03.pdf → 范例/HW03/HW03.pdf


+773 -0   范例/HW04/HW04.ipynb

@@ -0,0 +1,773 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"accelerator": "GPU",
"colab": {
"name": "HW04.ipynb",
"provenance": [],
"collapsed_sections": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "zC5KwRyl6Flp"
},
"source": [
"# Task description\n",
"- Classify the speakers of given features.\n",
"- Main goal: Learn how to use transformer.\n",
"- Baselines:\n",
" - Easy: Run sample code and know how to use transformer.\n",
" - Medium: Know how to adjust parameters of transformer.\n",
" - Hard: Construct [conformer](https://arxiv.org/abs/2005.08100) which is a variety of transformer. "
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "TPDoreyypeJE"
},
"source": [
"# Download dataset"
]
},
{
"cell_type": "code",
"metadata": {
"id": "QvpaILXnJIcw"
},
"source": [
"\"\"\"\n",
" For Google drive, You can download data form any link below.\n",
" If a link fails, please use another one.\n",
"\"\"\"\n",
"\"\"\" Download link 1 of Google drive \"\"\"\n",
"# !gdown --id '1T0RPnu-Sg5eIPwQPfYysipfcz81MnsYe' --output Dataset.zip\n",
"\"\"\" Download link 2 of Google drive \"\"\"\n",
"# !gdown --id '1CtHZhJ-mTpNsO-MqvAPIi4Yrt3oSBXYV' --output Dataset.zip\n",
"\"\"\" Download link 3 of Google drive \"\"\"\n",
"# !gdown --id '14hmoMgB1fe6v50biIceKyndyeYABGrRq' --output Dataset.zip\n",
"\"\"\" Download link 4 of Google drive \"\"\"\n",
"# !gdown --id '1e9x-Pjl3n7-9tK9LS_WjiMo2lru4UBH9' --output Dataset.zip\n",
"\"\"\" Download link 5 of Google drive \"\"\"\n",
"# !gdown --id '10TC0g46bcAz_jkiMl65zNmwttT4RiRgY' --output Dataset.zip\n",
"\"\"\" Download link 6 of Google drive \"\"\"\n",
"# !gdown --id '1MUGBvG_JjqO0C2JYHuyV3B0lvaf1kWIm' --output Dataset.zip\n",
"\"\"\" Download link 7 of Google drive \"\"\"\n",
"# !gdown --id '18M91P5DHwILNyOlssZ57AiPOR0OwutOM' --output Dataset.zip\n",
"\"\"\" For Google drive, you can unzip the data by the command below. \"\"\"\n",
"# !unzip Dataset.zip\n",
"\n",
"\"\"\"\n",
" For Dropbox, we split dataset into five files. \n",
" Please download all of them.\n",
"\"\"\"\n",
"# If Dropbox is not work. Please use google drive.\n",
"# !wget https://www.dropbox.com/s/vw324newiku0sz0/Dataset.tar.gz.aa?dl=0\n",
"# !wget https://www.dropbox.com/s/z840g69e7lnkayo/Dataset.tar.gz.ab?dl=0\n",
"# !wget https://www.dropbox.com/s/hl081e1ggonio81/Dataset.tar.gz.ac?dl=0\n",
"# !wget https://www.dropbox.com/s/fh3zd8ow668c4th/Dataset.tar.gz.ad?dl=0\n",
"# !wget https://www.dropbox.com/s/ydzygoy2pv6gw9d/Dataset.tar.gz.ae?dl=0\n",
"# !cat Dataset.tar.gz.* | tar zxvf -\n",
"\n",
"\"\"\"\n",
" For Onedrive, we split dataset into five files. \n",
" Please download all of them.\n",
"\"\"\"\n",
"!wget --no-check-certificate \"https://onedrive.live.com/download?cid=10C95EE5FD151BFB&resid=10C95EE5FD151BFB%21106&authkey=ACB6opQR3CG9kmc\" -O Dataset.tar.gz.aa\n",
"!wget --no-check-certificate \"https://onedrive.live.com/download?cid=93DDDDD552E145DB&resid=93DDDDD552E145DB%21106&authkey=AP6EepjxSdvyV6Y\" -O Dataset.tar.gz.ab\n",
"!wget --no-check-certificate \"https://onedrive.live.com/download?cid=644545816461BCCC&resid=644545816461BCCC%21106&authkey=ALiefB0kI7Epb0Q\" -O Dataset.tar.gz.ac\n",
"!wget --no-check-certificate \"https://onedrive.live.com/download?cid=77CEBB3C3C512821&resid=77CEBB3C3C512821%21106&authkey=AAXCx4TTDYC0yjM\" -O Dataset.tar.gz.ad\n",
"!wget --no-check-certificate \"https://onedrive.live.com/download?cid=383D0E0146A11B02&resid=383D0E0146A11B02%21106&authkey=ALwVc4StVbig6QI\" -O Dataset.tar.gz.ae\n",
"!cat Dataset.tar.gz.* | tar zxvf -"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "v1gYr_aoNDue"
},
"source": [
"# Data"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Mz_NpuAipk3h"
},
"source": [
"## Dataset\n",
"- Original dataset is [Voxceleb1](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/).\n",
"- The [license](https://creativecommons.org/licenses/by/4.0/) and [complete version](https://www.robots.ox.ac.uk/~vgg/data/voxceleb/files/license.txt) of Voxceleb1.\n",
"- We randomly select 600 speakers from Voxceleb1.\n",
"- Then preprocess the raw waveforms into mel-spectrograms.\n",
"\n",
"- Args:\n",
" - data_dir: The path to the data directory.\n",
" - metadata_path: The path to the metadata.\n",
" - segment_len: The length of audio segment for training. \n",
"- The architecture of data directory \\\\\n",
" - data directory \\\\\n",
" |---- metadata.json \\\\\n",
" |---- testdata.json \\\\\n",
" |---- mapping.json \\\\\n",
" |---- uttr-{random string}.pt \\\\\n",
"\n",
"- The information in metadata\n",
" - \"n_mels\": The dimention of mel-spectrogram.\n",
" - \"speakers\": A dictionary. \n",
" - Key: speaker ids.\n",
" - value: \"feature_path\" and \"mel_len\"\n",
"\n",
"\n",
"For efficiency, we segment the mel-spectrograms into segments in the traing step."
]
},
{
"cell_type": "code",
"metadata": {
"id": "cd7hoGhYtbXQ"
},
"source": [
"import os\n",
"import json\n",
"import torch\n",
"import random\n",
"from pathlib import Path\n",
"from torch.utils.data import Dataset\n",
"from torch.nn.utils.rnn import pad_sequence\n",
" \n",
" \n",
"class myDataset(Dataset):\n",
" def __init__(self, data_dir, segment_len=128):\n",
" self.data_dir = data_dir\n",
" self.segment_len = segment_len\n",
" \n",
" # Load the mapping from speaker neme to their corresponding id. \n",
" mapping_path = Path(data_dir) / \"mapping.json\"\n",
" mapping = json.load(mapping_path.open())\n",
" self.speaker2id = mapping[\"speaker2id\"]\n",
" \n",
" # Load metadata of training data.\n",
" metadata_path = Path(data_dir) / \"metadata.json\"\n",
" metadata = json.load(open(metadata_path))[\"speakers\"]\n",
" \n",
" # Get the total number of speaker.\n",
" self.speaker_num = len(metadata.keys())\n",
" self.data = []\n",
" for speaker in metadata.keys():\n",
" for utterances in metadata[speaker]:\n",
" self.data.append([utterances[\"feature_path\"], self.speaker2id[speaker]])\n",
" \n",
" def __len__(self):\n",
" return len(self.data)\n",
" \n",
" def __getitem__(self, index):\n",
" feat_path, speaker = self.data[index]\n",
" # Load preprocessed mel-spectrogram.\n",
" mel = torch.load(os.path.join(self.data_dir, feat_path))\n",
" \n",
" # Segmemt mel-spectrogram into \"segment_len\" frames.\n",
" if len(mel) > self.segment_len:\n",
" # Randomly get the starting point of the segment.\n",
" start = random.randint(0, len(mel) - self.segment_len)\n",
" # Get a segment with \"segment_len\" frames.\n",
" mel = torch.FloatTensor(mel[start:start+self.segment_len])\n",
" else:\n",
" mel = torch.FloatTensor(mel)\n",
" # Turn the speaker id into long for computing loss later.\n",
" speaker = torch.FloatTensor([speaker]).long()\n",
" return mel, speaker\n",
" \n",
" def get_speaker_number(self):\n",
" return self.speaker_num"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "mqJxjoi_NGnB"
},
"source": [
"## Dataloader\n",
"- Split dataset into training dataset(90%) and validation dataset(10%).\n",
"- Create dataloader to iterate the data.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "zuT1AuFENI8t"
},
"source": [
"import torch\n",
"from torch.utils.data import DataLoader, random_split\n",
"from torch.nn.utils.rnn import pad_sequence\n",
"\n",
"\n",
"def collate_batch(batch):\n",
" # Process features within a batch.\n",
" \"\"\"Collate a batch of data.\"\"\"\n",
" mel, speaker = zip(*batch)\n",
" # Because we train the model batch by batch, we need to pad the features in the same batch to make their lengths the same.\n",
" mel = pad_sequence(mel, batch_first=True, padding_value=-20) # pad log 10^(-20) which is very small value.\n",
" # mel: (batch size, length, 40)\n",
" return mel, torch.FloatTensor(speaker).long()\n",
"\n",
"\n",
"def get_dataloader(data_dir, batch_size, n_workers):\n",
" \"\"\"Generate dataloader\"\"\"\n",
" dataset = myDataset(data_dir)\n",
" speaker_num = dataset.get_speaker_number()\n",
" # Split dataset into training dataset and validation dataset\n",
" trainlen = int(0.9 * len(dataset))\n",
" lengths = [trainlen, len(dataset) - trainlen]\n",
" trainset, validset = random_split(dataset, lengths)\n",
"\n",
" train_loader = DataLoader(\n",
" trainset,\n",
" batch_size=batch_size,\n",
" shuffle=True,\n",
" drop_last=True,\n",
" num_workers=n_workers,\n",
" pin_memory=True,\n",
" collate_fn=collate_batch,\n",
" )\n",
" valid_loader = DataLoader(\n",
" validset,\n",
" batch_size=batch_size,\n",
" num_workers=n_workers,\n",
" drop_last=True,\n",
" pin_memory=True,\n",
" collate_fn=collate_batch,\n",
" )\n",
"\n",
" return train_loader, valid_loader, speaker_num\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "X0x6eXiHpr4R"
},
"source": [
"# Model\n",
"- TransformerEncoderLayer:\n",
" - Base transformer encoder layer in [Attention Is All You Need](https://arxiv.org/abs/1706.03762)\n",
" - Parameters:\n",
" - d_model: the number of expected features of the input (required).\n",
"\n",
" - nhead: the number of heads of the multiheadattention models (required).\n",
"\n",
" - dim_feedforward: the dimension of the feedforward network model (default=2048).\n",
"\n",
" - dropout: the dropout value (default=0.1).\n",
"\n",
" - activation: the activation function of intermediate layer, relu or gelu (default=relu).\n",
"\n",
"- TransformerEncoder:\n",
" - TransformerEncoder is a stack of N transformer encoder layers\n",
" - Parameters:\n",
" - encoder_layer: an instance of the TransformerEncoderLayer() class (required).\n",
"\n",
" - num_layers: the number of sub-encoder-layers in the encoder (required).\n",
"\n",
" - norm: the layer normalization component (optional)."
]
},
{
"cell_type": "code",
"metadata": {
"id": "SHX4eVj4tjtd"
},
"source": [
"import torch\n",
"import torch.nn as nn\n",
"import torch.nn.functional as F\n",
"\n",
"\n",
"class Classifier(nn.Module):\n",
" def __init__(self, d_model=80, n_spks=600, dropout=0.1):\n",
" super().__init__()\n",
" # Project the dimension of features from that of input into d_model.\n",
" self.prenet = nn.Linear(40, d_model)\n",
" # TODO:\n",
" # Change Transformer to Conformer.\n",
" # https://arxiv.org/abs/2005.08100\n",
" self.encoder_layer = nn.TransformerEncoderLayer(\n",
" d_model=d_model, dim_feedforward=256, nhead=2\n",
" )\n",
" # self.encoder = nn.TransformerEncoder(self.encoder_layer, num_layers=2)\n",
"\n",
" # Project the the dimension of features from d_model into speaker nums.\n",
" self.pred_layer = nn.Sequential(\n",
" nn.Linear(d_model, d_model),\n",
" nn.ReLU(),\n",
" nn.Linear(d_model, n_spks),\n",
" )\n",
"\n",
" def forward(self, mels):\n",
" \"\"\"\n",
" args:\n",
" mels: (batch size, length, 40)\n",
" return:\n",
" out: (batch size, n_spks)\n",
" \"\"\"\n",
" # out: (batch size, length, d_model)\n",
" out = self.prenet(mels)\n",
" # out: (length, batch size, d_model)\n",
" out = out.permute(1, 0, 2)\n",
" # The encoder layer expect features in the shape of (length, batch size, d_model).\n",
" out = self.encoder_layer(out)\n",
" # out: (batch size, length, d_model)\n",
" out = out.transpose(0, 1)\n",
" # mean pooling\n",
" stats = out.mean(dim=1)\n",
"\n",
" # out: (batch, n_spks)\n",
" out = self.pred_layer(stats)\n",
" return out\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "-__DolPGpvDZ"
},
"source": [
"# Learning rate schedule\n",
"- For transformer architecture, the design of learning rate schedule is different from that of CNN.\n",
"- Previous works show that the warmup of learning rate is useful for training models with transformer architectures.\n",
"- The warmup schedule\n",
" - Set learning rate to 0 in the beginning.\n",
" - The learning rate increases linearly from 0 to initial learning rate during warmup period."
]
},
{
"cell_type": "code",
"metadata": {
"id": "K-0816BntqT9"
},
"source": [
"import math\n",
"\n",
"import torch\n",
"from torch.optim import Optimizer\n",
"from torch.optim.lr_scheduler import LambdaLR\n",
"\n",
"\n",
"def get_cosine_schedule_with_warmup(\n",
" optimizer: Optimizer,\n",
" num_warmup_steps: int,\n",
" num_training_steps: int,\n",
" num_cycles: float = 0.5,\n",
" last_epoch: int = -1,\n",
"):\n",
" \"\"\"\n",
" Create a schedule with a learning rate that decreases following the values of the cosine function between the\n",
" initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the\n",
" initial lr set in the optimizer.\n",
"\n",
" Args:\n",
" optimizer (:class:`~torch.optim.Optimizer`):\n",
" The optimizer for which to schedule the learning rate.\n",
" num_warmup_steps (:obj:`int`):\n",
" The number of steps for the warmup phase.\n",
" num_training_steps (:obj:`int`):\n",
" The total number of training steps.\n",
" num_cycles (:obj:`float`, `optional`, defaults to 0.5):\n",
" The number of waves in the cosine schedule (the defaults is to just decrease from the max value to 0\n",
" following a half-cosine).\n",
" last_epoch (:obj:`int`, `optional`, defaults to -1):\n",
" The index of the last epoch when resuming training.\n",
"\n",
" Return:\n",
" :obj:`torch.optim.lr_scheduler.LambdaLR` with the appropriate schedule.\n",
" \"\"\"\n",
"\n",
" def lr_lambda(current_step):\n",
" # Warmup\n",
" if current_step < num_warmup_steps:\n",
" return float(current_step) / float(max(1, num_warmup_steps))\n",
" # decadence\n",
" progress = float(current_step - num_warmup_steps) / float(\n",
" max(1, num_training_steps - num_warmup_steps)\n",
" )\n",
" return max(\n",
" 0.0, 0.5 * (1.0 + math.cos(math.pi * float(num_cycles) * 2.0 * progress))\n",
" )\n",
"\n",
" return LambdaLR(optimizer, lr_lambda, last_epoch)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "IP03FFo9K8DS"
},
"source": [
"# Model Function\n",
"- Model forward function."
]
},
{
"cell_type": "code",
"metadata": {
"id": "fohaLEFJK9-t"
},
"source": [
"import torch\n",
"\n",
"\n",
"def model_fn(batch, model, criterion, device):\n",
" \"\"\"Forward a batch through the model.\"\"\"\n",
"\n",
" mels, labels = batch\n",
" mels = mels.to(device)\n",
" labels = labels.to(device)\n",
"\n",
" outs = model(mels)\n",
"\n",
" loss = criterion(outs, labels)\n",
"\n",
" # Get the speaker id with highest probability.\n",
" preds = outs.argmax(1)\n",
" # Compute accuracy.\n",
" accuracy = torch.mean((preds == labels).float())\n",
"\n",
" return loss, accuracy\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "F7cg-YrzLQcf"
},
"source": [
"# Validate\n",
"- Calculate accuracy of the validation set."
]
},
{
"cell_type": "code",
"metadata": {
"id": "mD-_p6nWLO2L"
},
"source": [
"from tqdm import tqdm\n",
"import torch\n",
"\n",
"\n",
"def valid(dataloader, model, criterion, device): \n",
" \"\"\"Validate on validation set.\"\"\"\n",
"\n",
" model.eval()\n",
" running_loss = 0.0\n",
" running_accuracy = 0.0\n",
" pbar = tqdm(total=len(dataloader.dataset), ncols=0, desc=\"Valid\", unit=\" uttr\")\n",
"\n",
" for i, batch in enumerate(dataloader):\n",
" with torch.no_grad():\n",
" loss, accuracy = model_fn(batch, model, criterion, device)\n",
" running_loss += loss.item()\n",
" running_accuracy += accuracy.item()\n",
"\n",
" pbar.update(dataloader.batch_size)\n",
" pbar.set_postfix(\n",
" loss=f\"{running_loss / (i+1):.2f}\",\n",
" accuracy=f\"{running_accuracy / (i+1):.2f}\",\n",
" )\n",
"\n",
" pbar.close()\n",
" model.train()\n",
"\n",
" return running_accuracy / len(dataloader)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "noHXyal5p1W5"
},
"source": [
"# Main function"
]
},
{
"cell_type": "code",
"metadata": {
"id": "chRQE7oYtw62"
},
"source": [
"from tqdm import tqdm\n",
"\n",
"import torch\n",
"import torch.nn as nn\n",
"from torch.optim import AdamW\n",
"from torch.utils.data import DataLoader, random_split\n",
"\n",
"\n",
"def parse_args():\n",
" \"\"\"arguments\"\"\"\n",
" config = {\n",
" \"data_dir\": \"./Dataset\",\n",
" \"save_path\": \"model.ckpt\",\n",
" \"batch_size\": 32,\n",
" \"n_workers\": 8,\n",
" \"valid_steps\": 2000,\n",
" \"warmup_steps\": 1000,\n",
" \"save_steps\": 10000,\n",
" \"total_steps\": 70000,\n",
" }\n",
"\n",
" return config\n",
"\n",
"\n",
"def main(\n",
" data_dir,\n",
" save_path,\n",
" batch_size,\n",
" n_workers,\n",
" valid_steps,\n",
" warmup_steps,\n",
" total_steps,\n",
" save_steps,\n",
"):\n",
" \"\"\"Main function.\"\"\"\n",
" device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
" print(f\"[Info]: Use {device} now!\")\n",
"\n",
" train_loader, valid_loader, speaker_num = get_dataloader(data_dir, batch_size, n_workers)\n",
" train_iterator = iter(train_loader)\n",
" print(f\"[Info]: Finish loading data!\",flush = True)\n",
"\n",
" model = Classifier(n_spks=speaker_num).to(device)\n",
" criterion = nn.CrossEntropyLoss()\n",
" optimizer = AdamW(model.parameters(), lr=1e-3)\n",
" scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)\n",
" print(f\"[Info]: Finish creating model!\",flush = True)\n",
"\n",
" best_accuracy = -1.0\n",
" best_state_dict = None\n",
"\n",
" pbar = tqdm(total=valid_steps, ncols=0, desc=\"Train\", unit=\" step\")\n",
"\n",
" for step in range(total_steps):\n",
" # Get data\n",
" try:\n",
" batch = next(train_iterator)\n",
" except StopIteration:\n",
" train_iterator = iter(train_loader)\n",
" batch = next(train_iterator)\n",
"\n",
" loss, accuracy = model_fn(batch, model, criterion, device)\n",
" batch_loss = loss.item()\n",
" batch_accuracy = accuracy.item()\n",
"\n",
" # Updata model\n",
" loss.backward()\n",
" optimizer.step()\n",
" scheduler.step()\n",
" optimizer.zero_grad()\n",
" \n",
" # Log\n",
" pbar.update()\n",
" pbar.set_postfix(\n",
" loss=f\"{batch_loss:.2f}\",\n",
" accuracy=f\"{batch_accuracy:.2f}\",\n",
" step=step + 1,\n",
" )\n",
"\n",
" # Do validation\n",
" if (step + 1) % valid_steps == 0:\n",
" pbar.close()\n",
"\n",
" valid_accuracy = valid(valid_loader, model, criterion, device)\n",
"\n",
" # keep the best model\n",
" if valid_accuracy > best_accuracy:\n",
" best_accuracy = valid_accuracy\n",
" best_state_dict = model.state_dict()\n",
"\n",
" pbar = tqdm(total=valid_steps, ncols=0, desc=\"Train\", unit=\" step\")\n",
"\n",
" # Save the best model so far.\n",
" if (step + 1) % save_steps == 0 and best_state_dict is not None:\n",
" torch.save(best_state_dict, save_path)\n",
" pbar.write(f\"Step {step + 1}, best model saved. (accuracy={best_accuracy:.4f})\")\n",
"\n",
" pbar.close()\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" main(**parse_args())\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "0R2rx3AyHpQ-"
},
"source": [
"# Inference"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "pSuI3WY9Fz78"
},
"source": [
"## Dataset of inference"
]
},
{
"cell_type": "code",
"metadata": {
"id": "4evns0055Dsx"
},
"source": [
"import os\n",
"import json\n",
"import torch\n",
"from pathlib import Path\n",
"from torch.utils.data import Dataset\n",
"\n",
"\n",
"class InferenceDataset(Dataset):\n",
" def __init__(self, data_dir):\n",
" testdata_path = Path(data_dir) / \"testdata.json\"\n",
" metadata = json.load(testdata_path.open())\n",
" self.data_dir = data_dir\n",
" self.data = metadata[\"utterances\"]\n",
"\n",
" def __len__(self):\n",
" return len(self.data)\n",
"\n",
" def __getitem__(self, index):\n",
" utterance = self.data[index]\n",
" feat_path = utterance[\"feature_path\"]\n",
" mel = torch.load(os.path.join(self.data_dir, feat_path))\n",
"\n",
" return feat_path, mel\n",
"\n",
"\n",
"def inference_collate_batch(batch):\n",
" \"\"\"Collate a batch of data.\"\"\"\n",
" feat_paths, mels = zip(*batch)\n",
"\n",
" return feat_paths, torch.stack(mels)\n"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "oAinHBG1GIWv"
},
"source": [
"## Main funcrion of Inference"
]
},
{
"cell_type": "code",
"metadata": {
"id": "yQaTt7VDHoRI"
},
"source": [
"import json\n",
"import csv\n",
"from pathlib import Path\n",
"from tqdm.notebook import tqdm\n",
"\n",
"import torch\n",
"from torch.utils.data import DataLoader\n",
"\n",
"def parse_args():\n",
" \"\"\"arguments\"\"\"\n",
" config = {\n",
" \"data_dir\": \"./Dataset\",\n",
" \"model_path\": \"./model.ckpt\",\n",
" \"output_path\": \"./output.csv\",\n",
" }\n",
"\n",
" return config\n",
"\n",
"\n",
"def main(\n",
" data_dir,\n",
" model_path,\n",
" output_path,\n",
"):\n",
" \"\"\"Main function.\"\"\"\n",
" device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
" print(f\"[Info]: Use {device} now!\")\n",
"\n",
" mapping_path = Path(data_dir) / \"mapping.json\"\n",
" mapping = json.load(mapping_path.open())\n",
"\n",
" dataset = InferenceDataset(data_dir)\n",
" dataloader = DataLoader(\n",
" dataset,\n",
" batch_size=1,\n",
" shuffle=False,\n",
" drop_last=False,\n",
" num_workers=8,\n",
" collate_fn=inference_collate_batch,\n",
" )\n",
" print(f\"[Info]: Finish loading data!\",flush = True)\n",
"\n",
" speaker_num = len(mapping[\"id2speaker\"])\n",
" model = Classifier(n_spks=speaker_num).to(device)\n",
" model.load_state_dict(torch.load(model_path))\n",
" model.eval()\n",
" print(f\"[Info]: Finish creating model!\",flush = True)\n",
"\n",
" results = [[\"Id\", \"Category\"]]\n",
" for feat_paths, mels in tqdm(dataloader):\n",
" with torch.no_grad():\n",
" mels = mels.to(device)\n",
" outs = model(mels)\n",
" preds = outs.argmax(1).cpu().numpy()\n",
" for feat_path, pred in zip(feat_paths, preds):\n",
" results.append([feat_path, mapping[\"id2speaker\"][str(pred)]])\n",
" \n",
" with open(output_path, 'w', newline='') as csvfile:\n",
" writer = csv.writer(csvfile)\n",
" writer.writerows(results)\n",
"\n",
"\n",
"if __name__ == \"__main__\":\n",
" main(**parse_args())\n"
],
"execution_count": null,
"outputs": []
}
]
}

BIN   05 Transformer/范例/HW04.pdf → 范例/HW04/HW04.pdf


01 Introduction/范例/Pytorch/Pytorch_Tutorial.ipynb → 范例/Pytorch/Pytorch_Tutorial.ipynb


01 Introduction/范例/Pytorch/Pytorch_Tutorial_1.pdf → 范例/Pytorch/Pytorch_Tutorial_1.pdf


01 Introduction/范例/Pytorch/Pytorch_Tutorial_2.pdf → 范例/Pytorch/Pytorch_Tutorial_2.pdf

