
Update Colab&Pytorch

main
Fafa-DL 2 years ago
parent
commit
1953e020e8
5 changed files with 1339 additions and 0 deletions
  1. +427 -0  2022 ML/00 Colab&Pytorch/Google_Colab_Tutorial.ipynb
  2. +0 -0  2022 ML/00 Colab&Pytorch/Google_Colab_Tutorial.pdf
  3. +0 -0  2022 ML/00 Colab&Pytorch/Pytorch_Tutorial_1.pdf
  4. +912 -0  2022 ML/00 Colab&Pytorch/Pytorch_Tutorial_2.ipynb
  5. +0 -0  2022 ML/00 Colab&Pytorch/Pytorch_Tutorial_2.pdf

2022 ML/00 Colab&Pytorch/Google_Colab_Tutorial.ipynb  +427 -0  View File

@@ -0,0 +1,427 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "ca2CpPPUvO-h"
},
"source": [
"# **Google Colab Tutorial**\n",
"\n",
"Video: https://youtu.be/YmPF0jrWn6Y\n",
"Should you have any question, contact TAs via <br/> ntu-ml-2022spring-ta@googlegroups.com\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xIN7RF4wjgHk"
},
"source": [
"<p><img alt=\"Colaboratory logo\" height=\"45px\" src=\"/img/colab_favicon.ico\" align=\"left\" hspace=\"10px\" vspace=\"0px\"></p>\n",
"\n",
"<h1>What is Colaboratory?</h1>\n",
"\n",
"Colaboratory, or \"Colab\" for short, allows you to write and execute Python in your browser, with \n",
"- Zero configuration required\n",
"- Free access to GPUs\n",
"- Easy sharing\n",
"\n",
"Whether you're a **student**, a **data scientist** or an **AI researcher**, Colab can make your work easier. Watch [Introduction to Colab](https://www.youtube.com/watch?v=inN8seMm7UI) to learn more, or just get started below!\n",
"\n",
"You can type python code in the code block, or use a leading exclamation mark ! to change the code block to bash environment to execute linux code."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "IrAxlhp3VBMD"
},
"source": [
"To utilize the free GPU provided by google, click on \"Runtime\"(執行階段) -> \"Change Runtime Type\"(變更執行階段類型). There are three options under \"Hardward Accelerator\"(硬體加速器), select \"GPU\". \n",
"* Doing this will restart the session, so make sure you change to the desired runtime before executing any code.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "CLUWxZKbvQpx",
"outputId": "ad5ee344-67fa-4e6f-e99f-0d7a289517a5"
},
"outputs": [
{
"data": {
"text/plain": [
"True"
]
},
"execution_count": 12,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import torch\n",
"torch.cuda.is_available() # is GPU available\n",
"# Outputs True if running with GPU"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "1jpUD08lMiDa",
"outputId": "ca16beaf-6200-46d7-d64c-b926eea744ba"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Fri Feb 18 10:43:31 2022 \n",
"+-----------------------------------------------------------------------------+\n",
"| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |\n",
"|-------------------------------+----------------------+----------------------+\n",
"| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |\n",
"| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |\n",
"| | | MIG M. |\n",
"|===============================+======================+======================|\n",
"| 0 Tesla K80 Off | 00000000:00:04.0 Off | 0 |\n",
"| N/A 70C P8 34W / 149W | 3MiB / 11441MiB | 0% Default |\n",
"| | | N/A |\n",
"+-------------------------------+----------------------+----------------------+\n",
" \n",
"+-----------------------------------------------------------------------------+\n",
"| Processes: |\n",
"| GPU GI CI PID Type Process name GPU Memory |\n",
"| ID ID Usage |\n",
"|=============================================================================|\n",
"| No running processes found |\n",
"+-----------------------------------------------------------------------------+\n"
]
}
],
"source": [
"# check allocated GPU type\n",
"!nvidia-smi"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "EAM_tPQAELh0"
},
"source": [
"**1. Download Files via google drive**\n",
"\n",
" A file stored in Google Drive has the following sharing link:\n",
"\n",
"https://drive.google.com/open?id=1sUr1x-GhJ_80vIGzVGEqFUSDYfwV50YW\n",
" \n",
" The random string after \"open?id=\" is the **file_id** <br />\n",
"![](https://i.imgur.com/77AeV88l.png)\n",
"\n",
" It is possible to download the file via Colab knowing the **file_id**, using the following command.\n",
"\n",
"\n",
"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "XztYEj0oD7J3",
"outputId": "83ddde6e-d745-4a0f-bb8f-dab8243a1dca"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading...\n",
"From: https://drive.google.com/uc?id=1sUr1x-GhJ_80vIGzVGEqFUSDYfwV50YW\n",
"To: /content/ML2022/pikachu.png\n",
"\r 0% 0.00/204k [00:00<?, ?B/s]\r100% 204k/204k [00:00<00:00, 83.4MB/s]\n"
]
}
],
"source": [
"# Download the file with file_id \"sUr1x-GhJ_80vIGzVGEqFUSDYfwV50YW\", and rename it to pikachu.png\n",
"!gdown --id '1sUr1x-GhJ_80vIGzVGEqFUSDYfwV50YW' --output pikachu.png"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "Gg3T23LXG-eL",
"outputId": "0dd8a3ca-85d5-4bc0-d593-4a3ab1bbfc99"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"pikachu.png\n"
]
}
],
"source": [
"# List all the files under the working directory\n",
"!ls"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "38dcGQujOVWM"
},
"source": [
"Exclamation mark (!) starts a new shell, does the operations, and then kills that shell, while percentage (%) affects the process associated with the notebook"
]
},
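{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example, the difference shows up when changing directories: `!cd` only affects its own temporary shell, while `%cd` changes the working directory of the notebook itself. A minimal illustration (the `/tmp` path is just an arbitrary example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!cd /tmp && pwd   # runs in a throw-away shell, so this prints /tmp ...\n",
"!pwd              # ... but the notebook's working directory is unchanged\n",
"%cd /tmp          # the %cd magic changes the directory for the whole session\n",
"!pwd              # now prints /tmp\n",
"%cd /content      # change back to Colab's default working directory"
]
},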
{
"cell_type": "markdown",
"metadata": {
"id": "dOQxjfAZAsys"
},
"source": [
"It can be seen that `pikachu.png` is saved the the current working directory. \n",
"\n",
"![](https://i.imgur.com/bonrOlgm.png)\n",
"\n",
"The working space is temporary, once you close the browser, the files will be gone.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "50uppJXgZwmW"
},
"source": [
"Double click to view image\n",
"\n",
"![](https://i.imgur.com/DTywPzAm.png)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "k_gmTo9NKtu9"
},
"source": [
"**2. Mounting Google Drive**\n",
"\n",
" One advantage of using google colab is that connection with other google services such as Google Drive is simple. By mounting google drive, the working files can be stored permanantly. After executing the following code block, your google drive will be mounted at `/content/drive`"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "FyrDicHSYlb2"
},
"source": [
"![](https://i.imgur.com/IbMf5Tg.png)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 347
},
"id": "BmvzTF5IJ6TL",
"outputId": "84238034-a652-4f6d-dadf-421199e4f9a6"
},
"outputs": [
{
"ename": "MessageError",
"evalue": "ignored",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mMessageError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-17-d5df0069828e>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mgoogle\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcolab\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mdrive\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mdrive\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmount\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'/content/drive'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/google/colab/drive.py\u001b[0m in \u001b[0;36mmount\u001b[0;34m(mountpoint, force_remount, timeout_ms, use_metadata_server)\u001b[0m\n\u001b[1;32m 116\u001b[0m \u001b[0mtimeout_ms\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mtimeout_ms\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 117\u001b[0m \u001b[0muse_metadata_server\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0muse_metadata_server\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 118\u001b[0;31m ephemeral=ephemeral)\n\u001b[0m\u001b[1;32m 119\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 120\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/google/colab/drive.py\u001b[0m in \u001b[0;36m_mount\u001b[0;34m(mountpoint, force_remount, timeout_ms, use_metadata_server, ephemeral)\u001b[0m\n\u001b[1;32m 139\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mephemeral\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 140\u001b[0m _message.blocking_request(\n\u001b[0;32m--> 141\u001b[0;31m 'request_auth', request={'authType': 'dfs_ephemeral'}, timeout_sec=None)\n\u001b[0m\u001b[1;32m 142\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 143\u001b[0m \u001b[0mmountpoint\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_os\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpath\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexpanduser\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mmountpoint\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/google/colab/_message.py\u001b[0m in \u001b[0;36mblocking_request\u001b[0;34m(request_type, request, timeout_sec, parent)\u001b[0m\n\u001b[1;32m 173\u001b[0m request_id = send_request(\n\u001b[1;32m 174\u001b[0m request_type, request, parent=parent, expect_reply=True)\n\u001b[0;32m--> 175\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mread_reply_from_input\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest_id\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtimeout_sec\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/google/colab/_message.py\u001b[0m in \u001b[0;36mread_reply_from_input\u001b[0;34m(message_id, timeout_sec)\u001b[0m\n\u001b[1;32m 104\u001b[0m reply.get('colab_msg_id') == message_id):\n\u001b[1;32m 105\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0;34m'error'\u001b[0m \u001b[0;32min\u001b[0m \u001b[0mreply\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 106\u001b[0;31m \u001b[0;32mraise\u001b[0m \u001b[0mMessageError\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mreply\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'error'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 107\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mreply\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'data'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 108\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mMessageError\u001b[0m: Error: credential propagation was unsuccessful"
]
}
],
"source": [
"from google.colab import drive\n",
"drive.mount('/content/drive')"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "AkmayCmGMD03"
},
"source": [
"After mounting the drive, the content of the google drive will be mounted on a directory named `MyDrive`\n",
"\n",
"![](https://i.imgur.com/jDtI10Cm.png)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "UhKhwipoMvXF"
},
"source": [
"After mounting the drive, all the changes will be synced with the google drive.\n",
"Since models could be quite large, make sure that your google drive has enough space."
]
},
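{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a rough sketch of how to keep an eye on that (the tiny model and the file name below are placeholders chosen only for illustration), save a checkpoint somewhere under `MyDrive` and check its size on disk:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import torch\n",
"\n",
"tiny_model = torch.nn.Linear(5, 1)                 # placeholder model; real models are much larger\n",
"ckpt_path = \"/content/drive/MyDrive/example.ckpt\"  # placeholder path inside the mounted drive\n",
"torch.save(tiny_model.state_dict(), ckpt_path)     # save the model weights to Google Drive\n",
"print(os.path.getsize(ckpt_path), \"bytes\")         # real checkpoints often take MBs to GBs"
]
},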
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "UT0TEPRS7KF6",
"outputId": "291edb11-1341-405b-ec74-d915976ff90c"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[Errno 2] No such file or directory: '/content/drive/MyDrive'\n",
"/content\n",
"/content/ML2022\n"
]
}
],
"source": [
"%cd /content/drive/MyDrive \n",
"#change directory to google drive\n",
"!mkdir ML2022 #make a directory named ML2022\n",
"%cd ./ML2022 \n",
"#change directory to ML2022"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "Oj13Q58QerAx"
},
"source": [
"Use bash command pwd to output the current directory"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "-S8l1-ReepkS",
"outputId": "051dc816-95b9-4922-bab9-b0c40b3d7909"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"/content/ML2022\n"
]
}
],
"source": [
"!pwd #output the current directory"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qSSvrDaBiDrP"
},
"source": [
"Repeat the downloading process, this time, the file will be stored permanently in your google drive."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/"
},
"id": "b39YMYicASvP",
"outputId": "0b08768d-2154-4a6a-d03c-e04f966f87ed"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading...\n",
"From: https://drive.google.com/uc?id=1sUr1x-GhJ_80vIGzVGEqFUSDYfwV50YW\n",
"To: /content/ML2022/pikachu.png\n",
"\r 0% 0.00/204k [00:00<?, ?B/s]\r100% 204k/204k [00:00<00:00, 82.1MB/s]\n"
]
}
],
"source": [
"!gdown --id '1sUr1x-GhJ_80vIGzVGEqFUSDYfwV50YW' --output pikachu.png"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BRTF7yxTY-aE"
},
"source": [
"Check the file structure\n",
"\n",
"![](https://i.imgur.com/DbligmOt.png)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "D0URgikZXl5I"
},
"source": [
"For all the homeworks, the data can be downloaded and stored similar as demonstrated in this notebook. "
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "Google Colab Tutorial 2022",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}

2022 ML/01 Introductionof Deep Learning/Colab Tutorial 2022.pdf → 2022 ML/00 Colab&Pytorch/Google_Colab_Tutorial.pdf View File


2022 ML/01 Introductionof Deep Learning/Pytorch Tutorial 1.pdf → 2022 ML/00 Colab&Pytorch/Pytorch_Tutorial_1.pdf View File


2022 ML/00 Colab&Pytorch/Pytorch_Tutorial_2.ipynb  +912 -0  View File

@@ -0,0 +1,912 @@
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"name": "Pytorch Tutorial Colab example",
"provenance": [],
"collapsed_sections": []
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"accelerator": "GPU"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "tHILOGjOQbsQ"
},
"source": [
"# **Pytorch Tutorial 2**\n",
"Video: https://youtu.be/VbqNn20FoHM"
]
},
{
"cell_type": "code",
"metadata": {
"id": "C1zA7GupxdJv"
},
"source": [
"import torch"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "6Eqj90EkWbWx"
},
"source": [
"**1. Pytorch Documentation Explanation with torch.max**\n",
"\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "JCXOg-iSQuk7",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "957c8de4-d306-4533-f69e-fe285a6409df"
},
"source": [
"x = torch.randn(4,5)\n",
"y = torch.randn(4,5)\n",
"z = torch.randn(4,5)\n",
"print(x)\n",
"print(y)\n",
"print(z)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([[ 0.5700, -0.5425, -0.5726, -1.0554, -1.2836],\n",
" [-1.0144, 0.8137, 1.7094, 0.6248, 0.3325],\n",
" [ 1.0856, 0.2572, -0.6015, 0.4504, -0.8093],\n",
" [-0.9323, -0.4973, -1.5003, 0.6611, 0.8620]])\n",
"tensor([[ 1.0339, -0.4076, 0.7701, 1.4776, -0.5398],\n",
" [-0.4112, 0.7838, 0.4770, 0.4791, -0.4028],\n",
" [-0.8085, -1.0560, 0.0614, 0.0789, -1.1773],\n",
" [-0.6305, -0.5189, 0.1551, 0.0938, -1.0175]])\n",
"tensor([[ 0.1980, -0.9233, -1.4898, 1.3691, 0.8554],\n",
" [-1.2373, 0.0323, 0.3434, 0.3969, 1.6149],\n",
" [-0.9932, -1.5508, 1.8088, 0.0051, -1.0612],\n",
" [ 0.1128, -0.2045, -0.1560, 0.8429, -0.3653]])\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "EEqa9GFoWF78",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "4f38ae34-5b7e-4b12-e03a-ed6bb2602f17"
},
"source": [
"# 1. max of entire tensor (torch.max(input) → Tensor)\n",
"m = torch.max(x)\n",
"print(m)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor(1.7094)\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "wffThGDyWKxJ",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "eba5d365-75b5-468d-b088-b72b3c0421a8"
},
"source": [
"# 2. max along a dimension (torch.max(input, dim, keepdim=False, *, out=None) → (Tensor, LongTensor))\n",
"m, idx = torch.max(x,0)\n",
"print(m)\n",
"print(idx)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([1.0856, 0.8137, 1.7094, 0.6611, 0.8620])\n",
"tensor([2, 1, 1, 3, 3])\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "oKDQW3tIXKg-",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "49cd8a9b-9e9c-4cb6-c8cb-9ae122e845ca"
},
"source": [
"# 2-2\n",
"m, idx = torch.max(input=x,dim=0)\n",
"print(m)\n",
"print(idx)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([1.0856, 0.8137, 1.7094, 0.6611, 0.8620])\n",
"tensor([2, 1, 1, 3, 3])\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "6QZ6WRLyX3De",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "5f9dc0bd-a53b-4ea2-9364-5e253c18f0f2"
},
"source": [
"# 2-3\n",
"m, idx = torch.max(x,0,False)\n",
"print(m)\n",
"print(idx)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([1.0856, 0.8137, 1.7094, 0.6611, 0.8620])\n",
"tensor([2, 1, 1, 3, 3])\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "nqGuctkKbUEn",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "502eb14b-8253-4e69-e12f-90a37d2d3068"
},
"source": [
"# 2-4\n",
"m, idx = torch.max(x,dim=0,keepdim=True)\n",
"print(m)\n",
"print(idx)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([[1.0856, 0.8137, 1.7094, 0.6611, 0.8620]])\n",
"tensor([[2, 1, 1, 3, 3]])\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "9OMzxuMlZPIu",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "e287ebbf-cb95-4ef8-bc74-d3436e4316b5"
},
"source": [
"# 2-5\n",
"p = (m,idx)\n",
"torch.max(x,0,False,out=p)\n",
"print(p[0])\n",
"print(p[1])\n"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([1.0856, 0.8137, 1.7094, 0.6611, 0.8620])\n",
"tensor([2, 1, 1, 3, 3])\n"
]
},
{
"output_type": "stream",
"name": "stderr",
"text": [
"/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:3: UserWarning: An output with one or more elements was resized since it had shape [1, 1, 5], which does not match the required output shape [1, 5].This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at ../aten/src/ATen/native/Resize.cpp:23.)\n",
" This is separate from the ipykernel package so we can avoid doing imports until\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "uhd4TqGTbD2c",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 333
},
"outputId": "be65fed5-2a38-40e7-f26e-9edf30b97654"
},
"source": [
"# 2-6\n",
"p = (m,idx)\n",
"torch.max(x,0,False,p)\n",
"print(p[0])\n",
"print(p[1])"
],
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "TypeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-9-07a6e420b81d>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m# 2-6\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0mp\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0mm\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0midx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmax\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mp\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 4\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mp\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mp\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mTypeError\u001b[0m: max() received an invalid combination of arguments - got (Tensor, int, bool, tuple), but expected one of:\n * (Tensor input)\n * (Tensor input, Tensor other, *, Tensor out)\n didn't match because some of the arguments have invalid types: (Tensor, !int!, !bool!, !tuple!)\n * (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)\n * (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "wbxjUSOXxN0n",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 262
},
"outputId": "aaff3239-eafe-4c31-e3d3-5bc343c88397"
},
"source": [
"# 2-7\n",
"m, idx = torch.max(x,True)"
],
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "TypeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-10-366ecd7d16b3>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m# 2-7\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mm\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0midx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmax\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;32mTrue\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;31mTypeError\u001b[0m: max() received an invalid combination of arguments - got (Tensor, bool), but expected one of:\n * (Tensor input)\n * (Tensor input, Tensor other, *, Tensor out)\n * (Tensor input, int dim, bool keepdim, *, tuple of Tensors out)\n * (Tensor input, name dim, bool keepdim, *, tuple of Tensors out)\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "iMwhGLlGWYaR",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "f712e674-21f4-45de-958c-fb0b7e76dfa3"
},
"source": [
"# 3. max(choose max) operators on two tensors (torch.max(input, other, *, out=None) → Tensor)\n",
"t = torch.max(x,y)\n",
"print(t)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor([[ 1.0339, -0.4076, 0.7701, 1.4776, -0.5398],\n",
" [-0.4112, 0.8137, 1.7094, 0.6248, 0.3325],\n",
" [ 1.0856, 0.2572, 0.0614, 0.4504, -0.8093],\n",
" [-0.6305, -0.4973, 0.1551, 0.6611, 0.8620]])\n"
]
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nFxRKu2Dedwb"
},
"source": [
"**2. Common errors**\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "KMcRyMxGwhul"
},
"source": [
"The following code blocks show some common errors while using the torch library. First, execute the code with error, and then execute the next code block to fix the error. You need to change the runtime to GPU.\n"
]
},
{
"cell_type": "code",
"metadata": {
"id": "eX-kKdi6ynFf"
},
"source": [
"import torch"
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"metadata": {
"id": "-muJ4KKreoP2",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 375
},
"outputId": "a663e92d-63f5-4a1a-fea8-45badaf17020"
},
"source": [
"# 1. different device error\n",
"model = torch.nn.Linear(5,1).to(\"cuda:0\")\n",
"x = torch.randn(5).to(\"cpu\")\n",
"y = model(x)"
],
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "RuntimeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-13-a5238fdc1590>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0mmodel\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mLinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"cuda:0\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"cpu\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0my\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodel\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 1100\u001b[0m if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[1;32m 1101\u001b[0m or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[0;32m-> 1102\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1103\u001b[0m \u001b[0;31m# Do not call functions when jit is used\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1104\u001b[0m \u001b[0mfull_backward_hooks\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m 101\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 102\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mTensor\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mTensor\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 103\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mF\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbias\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 104\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 105\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mextra_repr\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py\u001b[0m in \u001b[0;36mlinear\u001b[0;34m(input, weight, bias)\u001b[0m\n\u001b[1;32m 1846\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mhas_torch_function_variadic\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbias\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1847\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mhandle_torch_function\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mlinear\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbias\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbias\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mbias\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1848\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_C\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_nn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlinear\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbias\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1849\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1850\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mRuntimeError\u001b[0m: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat2 in method wrapper_mm)"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "a54PqxJLe9-c",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "59d31dd0-bb14-4741-b274-45960e884fd1"
},
"source": [
"# 1. different device error (fixed)\n",
"x = torch.randn(5).to(\"cuda:0\")\n",
"y = model(x)\n",
"print(y.shape)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"torch.Size([1])\n"
]
}
]
},
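{
"cell_type": "markdown",
"metadata": {},
"source": [
"A common pattern that avoids this kind of device mismatch altogether is to pick the device once and move both the model and the data with the same variable. A minimal sketch:"
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# choose the device once, then reuse the same variable everywhere\n",
"device = \"cuda:0\" if torch.cuda.is_available() else \"cpu\"\n",
"model = torch.nn.Linear(5, 1).to(device)\n",
"x = torch.randn(5).to(device)\n",
"y = model(x)\n",
"print(y.shape)  # works whether or not a GPU is available"
],
"execution_count": null,
"outputs": []
},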
{
"cell_type": "code",
"metadata": {
"id": "n7OHtZwbi7Qw",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 208
},
"outputId": "4afb4e0c-6477-496c-fe57-edd8452ee3cd"
},
"source": [
"# 2. mismatched dimensions error 1\n",
"x = torch.randn(4,5)\n",
"y = torch.randn(5,4)\n",
"z = x + y"
],
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "RuntimeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-15-912d8d278c61>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0my\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0mz\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0my\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;31mRuntimeError\u001b[0m: The size of tensor a (5) must match the size of tensor b (4) at non-singleton dimension 1"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "qVynzvrskFCD",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "fd2f5918-82e9-4376-bc2e-38a00c2a3169"
},
"source": [
"# 2. mismatched dimensions error 1 (fixed by transpose)\n",
"y = y.transpose(0,1)\n",
"z = x + y\n",
"print(z.shape)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"torch.Size([4, 5])\n"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "Hgzgb9gJANod",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 411
},
"outputId": "15e70cb4-94ec-4b93-e9d0-cd56a01091a9"
},
"source": [
"# 3. cuda out of memory error\n",
"import torch\n",
"import torchvision.models as models\n",
"resnet18 = models.resnet18().to(\"cuda:0\") # Neural Networks for Image Recognition\n",
"data = torch.randn(2048,3,244,244) # Create fake data (512 images)\n",
"out = resnet18(data.to(\"cuda:0\")) # Use Data as Input and Feed to Model\n",
"print(out.shape)\n"
],
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "RuntimeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-17-711923c7f347>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mresnet18\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mmodels\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mresnet18\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"cuda:0\"\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# Neural Networks for Image Recognition\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mdata\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m2048\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m244\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m244\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# Create fake data (512 images)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 6\u001b[0;31m \u001b[0mout\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mresnet18\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mdata\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\"cuda:0\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# Use Data as Input and Feed to Model\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 7\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mout\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mshape\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 1100\u001b[0m if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[1;32m 1101\u001b[0m or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[0;32m-> 1102\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1103\u001b[0m \u001b[0;31m# Do not call functions when jit is used\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1104\u001b[0m \u001b[0mfull_backward_hooks\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, x)\u001b[0m\n\u001b[1;32m 247\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 248\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mforward\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mx\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mTensor\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;34m->\u001b[0m \u001b[0mTensor\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 249\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_forward_impl\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 250\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 251\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torchvision/models/resnet.py\u001b[0m in \u001b[0;36m_forward_impl\u001b[0;34m(self, x)\u001b[0m\n\u001b[1;32m 231\u001b[0m \u001b[0;31m# See note [TorchScript super()]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 232\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mconv1\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 233\u001b[0;31m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbn1\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 234\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrelu\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 235\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmaxpool\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 1100\u001b[0m if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[1;32m 1101\u001b[0m or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[0;32m-> 1102\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1103\u001b[0m \u001b[0;31m# Do not call functions when jit is used\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1104\u001b[0m \u001b[0mfull_backward_hooks\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/batchnorm.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input)\u001b[0m\n\u001b[1;32m 177\u001b[0m \u001b[0mbn_training\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 178\u001b[0m \u001b[0mexponential_average_factor\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 179\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0meps\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 180\u001b[0m )\n\u001b[1;32m 181\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py\u001b[0m in \u001b[0;36mbatch_norm\u001b[0;34m(input, running_mean, running_var, weight, bias, training, momentum, eps)\u001b[0m\n\u001b[1;32m 2281\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2282\u001b[0m return torch.batch_norm(\n\u001b[0;32m-> 2283\u001b[0;31m \u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mbias\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrunning_mean\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mrunning_var\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtraining\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mmomentum\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0meps\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mbackends\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcudnn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0menabled\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2284\u001b[0m )\n\u001b[1;32m 2285\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mRuntimeError\u001b[0m: CUDA out of memory. Tried to allocate 7.27 GiB (GPU 0; 11.17 GiB total capacity; 8.67 GiB already allocated; 1.96 GiB free; 8.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "VPksKnB_w343",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "2566b695-61dd-4ade-b472-0957db36a94b"
},
"source": [
"# 3. cuda out of memory error (fixed, but it might take some time to execute)\n",
"for d in data:\n",
" out = resnet18(d.to(\"cuda:0\").unsqueeze(0))\n",
"print(out.shape)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"torch.Size([1, 1000])\n"
]
}
]
},
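{
"cell_type": "markdown",
"metadata": {},
"source": [
"Another sketch of the same fix (the batch size below is an arbitrary choice): feed the data in small batches with gradient tracking disabled. This keeps memory usage bounded and is usually much faster than one image at a time."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"# 3. cuda out of memory error (alternative sketch: small batches + torch.no_grad)\n",
"batch_size = 64        # arbitrary choice; tune it to fit the GPU memory\n",
"outs = []\n",
"with torch.no_grad():  # no gradients are needed for pure inference\n",
"    for i in range(0, len(data), batch_size):\n",
"        batch = data[i:i + batch_size].to(\"cuda:0\")\n",
"        outs.append(resnet18(batch).cpu())\n",
"out = torch.cat(outs)\n",
"print(out.shape)       # torch.Size([2048, 1000])"
],
"execution_count": null,
"outputs": []
},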
{
"cell_type": "code",
"metadata": {
"id": "vqszlxEE0Bk0",
"colab": {
"base_uri": "https://localhost:8080/",
"height": 375
},
"outputId": "b5e17e21-80dc-461c-a31e-c5eabccf1431"
},
"source": [
"# 4. mismatched tensor type\n",
"import torch.nn as nn\n",
"L = nn.CrossEntropyLoss()\n",
"outs = torch.randn(5,5)\n",
"labels = torch.Tensor([1,2,3,4,0])\n",
"lossval = L(outs,labels) # Calculate CrossEntropyLoss between outs and labels"
],
"execution_count": null,
"outputs": [
{
"output_type": "error",
"ename": "RuntimeError",
"evalue": "ignored",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mRuntimeError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-21-60a5d1aad216>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m()\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mouts\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mrandn\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mlabels\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTensor\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m1\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m2\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m3\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 6\u001b[0;31m \u001b[0mlossval\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mL\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mouts\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0mlabels\u001b[0m\u001b[0;34m)\u001b[0m \u001b[0;31m# Calculate CrossEntropyLoss between outs and labels\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py\u001b[0m in \u001b[0;36m_call_impl\u001b[0;34m(self, *input, **kwargs)\u001b[0m\n\u001b[1;32m 1100\u001b[0m if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks\n\u001b[1;32m 1101\u001b[0m or _global_forward_hooks or _global_forward_pre_hooks):\n\u001b[0;32m-> 1102\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mforward_call\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m*\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1103\u001b[0m \u001b[0;31m# Do not call functions when jit is used\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1104\u001b[0m \u001b[0mfull_backward_hooks\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mnon_full_backward_hooks\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m[\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/modules/loss.py\u001b[0m in \u001b[0;36mforward\u001b[0;34m(self, input, target)\u001b[0m\n\u001b[1;32m 1150\u001b[0m return F.cross_entropy(input, target, weight=self.weight,\n\u001b[1;32m 1151\u001b[0m \u001b[0mignore_index\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mignore_index\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreduction\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mreduction\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1152\u001b[0;31m label_smoothing=self.label_smoothing)\n\u001b[0m\u001b[1;32m 1153\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1154\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py\u001b[0m in \u001b[0;36mcross_entropy\u001b[0;34m(input, target, weight, size_average, ignore_index, reduce, reduction, label_smoothing)\u001b[0m\n\u001b[1;32m 2844\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0msize_average\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0mreduce\u001b[0m \u001b[0;32mis\u001b[0m \u001b[0;32mnot\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2845\u001b[0m \u001b[0mreduction\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0m_Reduction\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mlegacy_get_string\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msize_average\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mreduce\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 2846\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0mtorch\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_C\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_nn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mcross_entropy_loss\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0minput\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mtarget\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mweight\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0m_Reduction\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mget_enum\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mreduction\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mignore_index\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mlabel_smoothing\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2847\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2848\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mRuntimeError\u001b[0m: expected scalar type Long but found Float"
]
}
]
},
{
"cell_type": "code",
"metadata": {
"id": "CZwgwup_1dgS",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "cdc614fc-b533-4f4d-ce39-56f55a0d942c"
},
"source": [
"# 4. mismatched tensor type (fixed)\n",
"labels = labels.long()\n",
"lossval = L(outs,labels)\n",
"print(lossval)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"tensor(2.0054)\n"
]
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "dSuNdA8F06dK"
},
"source": [
"**3. More on dataset and dataloader**\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "in84z_xu1rE6"
},
"source": [
"A dataset is a cluster of data in a organized way. A dataloader is a loader which can iterate through the data set."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "34zfh-c22Qqs"
},
"source": [
"Let a dataset be the English alphabets \"abcdefghijklmnopqrstuvwxyz\""
]
},
{
"cell_type": "code",
"metadata": {
"id": "TaiHofty1qKA"
},
"source": [
"dataset = \"abcdefghijklmnopqrstuvwxyz\""
],
"execution_count": null,
"outputs": []
},
{
"cell_type": "markdown",
"metadata": {
"id": "h0jwhVa12h3a"
},
"source": [
"A simple dataloader could be implemented with the python code \"for\""
]
},
{
"cell_type": "code",
"metadata": {
"id": "bWC5Wwbv2egy",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "fb5ca312-ccde-476b-9ee9-0c3d9d0ce9ed"
},
"source": [
"for datapoint in dataset:\n",
" print(datapoint)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"a\n",
"b\n",
"c\n",
"d\n",
"e\n",
"f\n",
"g\n",
"h\n",
"i\n",
"j\n",
"k\n",
"l\n",
"m\n",
"n\n",
"o\n",
"p\n",
"q\n",
"r\n",
"s\n",
"t\n",
"u\n",
"v\n",
"w\n",
"x\n",
"y\n",
"z\n"
]
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "n33VKzkG2y2U"
},
"source": [
"When using the dataloader, we often like to shuffle the data. This is where torch.utils.data.DataLoader comes in handy. If each data is an index (0,1,2...) from the view of torch.utils.data.DataLoader, shuffling can simply be done by shuffling an index array. \n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "9MXUUKQ65APf"
},
"source": [
"torch.utils.data.DataLoader will need two imformation to fulfill its role. First, it needs to know the length of the data. Second, once torch.utils.data.DataLoader outputs the index of the shuffling results, the dataset needs to return the corresponding data."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BV5txsjK5j4j"
},
"source": [
"Therefore, torch.utils.data.Dataset provides the imformation by two functions, `__len__()` and `__getitem__()` to support torch.utils.data.Dataloader"
]
},
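{
"cell_type": "markdown",
"metadata": {},
"source": [
"Before moving to the real DataLoader, here is a tiny sketch of the index-shuffling idea (illustrative only): permute the indices and read the dataset through them."
]
},
{
"cell_type": "code",
"metadata": {},
"source": [
"import torch\n",
"\n",
"perm = torch.randperm(len(dataset))          # a shuffled index array for the alphabet dataset above\n",
"print([dataset[int(i)] for i in perm[:5]])   # the first five shuffled datapoints"
],
"execution_count": null,
"outputs": []
},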
{
"cell_type": "code",
"metadata": {
"id": "A0IEkemJ5ajD",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "0f868f2f-40c5-46ea-ec85-e3ec6d05c2f2"
},
"source": [
"import torch\n",
"import torch.utils.data \n",
"class ExampleDataset(torch.utils.data.Dataset):\n",
" def __init__(self):\n",
" self.data = \"abcdefghijklmnopqrstuvwxyz\"\n",
" \n",
" def __getitem__(self,idx): # if the index is idx, what will be the data?\n",
" return self.data[idx]\n",
" \n",
" def __len__(self): # What is the length of the dataset\n",
" return len(self.data)\n",
"\n",
"dataset1 = ExampleDataset() # create the dataset\n",
"dataloader = torch.utils.data.DataLoader(\n",
" dataset = dataset1, \n",
" shuffle = True, \n",
" batch_size = 1\n",
" )\n",
"for datapoint in dataloader:\n",
" print(datapoint)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"['s']\n",
"['g']\n",
"['b']\n",
"['m']\n",
"['r']\n",
"['u']\n",
"['y']\n",
"['e']\n",
"['n']\n",
"['c']\n",
"['h']\n",
"['x']\n",
"['w']\n",
"['a']\n",
"['l']\n",
"['k']\n",
"['o']\n",
"['z']\n",
"['q']\n",
"['j']\n",
"['v']\n",
"['d']\n",
"['f']\n",
"['i']\n",
"['p']\n",
"['t']\n"
]
}
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nTt-ZTid9S2n"
},
"source": [
"A simple data augmentation technique can be done by changing the code in `__len__()` and `__getitem__()`. Suppose we want to double the length of the dataset by adding in the uppercase letters, using only the lowercase dataset, you can change the dataset to the following."
]
},
{
"cell_type": "code",
"metadata": {
"id": "7Wn3BA2j-NXl",
"colab": {
"base_uri": "https://localhost:8080/"
},
"outputId": "09f8509f-f53f-4ebd-aedd-7691fcd51ec4"
},
"source": [
"import torch.utils.data \n",
"class ExampleDataset(torch.utils.data.Dataset):\n",
" def __init__(self):\n",
" self.data = \"abcdefghijklmnopqrstuvwxyz\"\n",
" \n",
" def __getitem__(self,idx): # if the index is idx, what will be the data?\n",
" if idx >= len(self.data): # if the index >= 26, return upper case letter\n",
" return self.data[idx%26].upper()\n",
" else: # if the index < 26, return lower case, return lower case letter\n",
" return self.data[idx]\n",
" \n",
" def __len__(self): # What is the length of the dataset\n",
" return 2 * len(self.data) # The length is now twice as large\n",
"\n",
"dataset1 = ExampleDataset() # create the dataset\n",
"dataloader = torch.utils.data.DataLoader(dataset = dataset1,shuffle = True,batch_size = 1)\n",
"for datapoint in dataloader:\n",
" print(datapoint)"
],
"execution_count": null,
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"['S']\n",
"['Q']\n",
"['U']\n",
"['a']\n",
"['t']\n",
"['z']\n",
"['i']\n",
"['g']\n",
"['W']\n",
"['c']\n",
"['k']\n",
"['b']\n",
"['w']\n",
"['y']\n",
"['v']\n",
"['N']\n",
"['p']\n",
"['q']\n",
"['R']\n",
"['E']\n",
"['u']\n",
"['G']\n",
"['Y']\n",
"['K']\n",
"['P']\n",
"['l']\n",
"['m']\n",
"['B']\n",
"['C']\n",
"['F']\n",
"['d']\n",
"['Z']\n",
"['I']\n",
"['n']\n",
"['T']\n",
"['M']\n",
"['x']\n",
"['f']\n",
"['L']\n",
"['o']\n",
"['V']\n",
"['O']\n",
"['s']\n",
"['e']\n",
"['A']\n",
"['r']\n",
"['J']\n",
"['X']\n",
"['j']\n",
"['h']\n",
"['D']\n",
"['H']\n"
]
}
]
}
]
}

2022 ML/01 Introductionof Deep Learning/Pytorch Tutorial 2.pdf → 2022 ML/00 Colab&Pytorch/Pytorch_Tutorial_2.pdf View File

