
Improve PyTorch lecture

savefigure
bushuhui 4 years ago
parent
commit
360856ceac
29 changed files with 1146 additions and 1688 deletions
  1. 5_nn/2-mlp_bp.ipynb  (+2 -2)
  2. 6_pytorch/1-tensor.ipynb  (+25 -27)
  3. 6_pytorch/2-autograd.ipynb  (+23 -29)
  4. 6_pytorch/3-linear-regression.ipynb  (+46 -80)
  5. 6_pytorch/4-logistic-regression.ipynb  (+121 -215)
  6. 6_pytorch/5-deep-nn.ipynb  (+0 -693)
  7. 6_pytorch/5-nn-sequential-module.ipynb  (+149 -434)
  8. 6_pytorch/6-deep-nn.ipynb  (+671 -0)
  9. 6_pytorch/7-param_initialize.ipynb  (+9 -13)
  10. 6_pytorch/imgs/MNIST.jpeg  (BIN)
  11. 6_pytorch/imgs/logistic_function.png  (BIN)
  12. 6_pytorch/imgs/softmax.jpeg  (BIN)
  13. 6_pytorch/optimizer/6_1-sgd.ipynb  (+4 -102)
  14. 6_pytorch/optimizer/6_2-momentum.ipynb  (+2 -2)
  15. 6_pytorch/optimizer/6_3-adagrad.ipynb  (+2 -2)
  16. 6_pytorch/optimizer/6_4-rmsprop.ipynb  (+2 -2)
  17. 6_pytorch/optimizer/6_5-adadelta.ipynb  (+2 -2)
  18. 6_pytorch/optimizer/6_6-adam.ipynb  (+2 -2)
  19. 7_deep_learning/imgs/ResNet.png  (BIN)
  20. 7_deep_learning/imgs/lena.png  (BIN)
  21. 7_deep_learning/imgs/lena3.png  (BIN)
  22. 7_deep_learning/imgs/lena512.png  (BIN)
  23. 7_deep_learning/imgs/nn_lenet.png  (BIN)
  24. 7_deep_learning/imgs/residual.png  (BIN)
  25. 7_deep_learning/imgs/resnet1.png  (BIN)
  26. 7_deep_learning/imgs/tensor_data_structure.svg  (+2 -0)
  27. 7_deep_learning/imgs/trans.bkp.PNG  (BIN)
  28. README.md  (+9 -11)
  29. README_ENG.md  (+75 -72)

5_nn/2-mlp_bp.ipynb  (+2 -2)

@@ -1011,7 +1011,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -1025,7 +1025,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
"version": "3.9.7"
}
},
"nbformat": 4,


6_pytorch/1-tensor.ipynb  (+25 -27)

@@ -4,14 +4,17 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"# Tensor and Variable\n",
"# PyTorch\n",
"\n",
"PyTorch是基于Python的科学计算包,其旨在服务两类场合:\n",
"* 替代NumPy发挥GPU潜能\n",
"* 提供了高度灵活性和效率的深度学习平台\n",
"\n",
"PyTorch的简洁设计使得它入门很简单,本部分内容在深入介绍PyTorch之前,先介绍一些PyTorch的基础知识,让大家能够对PyTorch有一个大致的了解,并能够用PyTorch搭建一个简单的神经网络,然后在深入学习如何使用PyTorch实现各类网络结构。在学习过程,可能部分内容暂时不太理解,可先不予以深究,后续的课程将会对此进行深入讲解。\n",
"\n",
"张量(Tensor)是一种专门的数据结构,非常类似于数组和矩阵。在PyTorch中,我们使用张量来编码模型的输入和输出,以及模型的参数。\n",
"\n",
"张量类似于`NumPy`的`ndarray`,不同之处在于张量可以在GPU或其他硬件加速器上运行。事实上,张量和NumPy数组通常可以共享相同的底层内存,从而消除了复制数据的需要(请参阅使用NumPy的桥接)。张量还针对自动微分进行了优化,在Autograd部分中看到更多关于这一点的内介绍。\n",
"\n",
"`variable`是一种可以不断变化的变量,符合反向传播,参数更新的属性。PyTorch的`variable`是一个存放会变化值的内存位置,里面的值会不停变化,像装糖果(糖果就是数据,即tensor)的盒子,糖果的数量不断变化。pytorch都是由tensor计算的,而tensor里面的参数是variable形式。\n"
"![PyTorch Demo](imgs/PyTorch.png)\n"
]
},
{
@@ -20,6 +23,12 @@
"source": [
"## 1. Tensor基本用法\n",
"\n",
"张量(Tensor)是一种专门的数据结构,非常类似于数组和矩阵。在PyTorch中,我们使用张量来编码模型的输入和输出,以及模型的参数。\n",
"\n",
"张量类似于`NumPy`的`ndarray`,不同之处在于张量可以在GPU或其他硬件加速器上运行。事实上,张量和NumPy数组通常可以共享相同的底层内存,从而消除了复制数据的需要(请参阅使用NumPy的桥接)。张量还针对自动微分进行了优化,在Autograd部分中看到更多关于这一点的内介绍。\n",
"\n",
"`variable`是一种可以不断变化的变量,符合反向传播,参数更新的属性。PyTorch的`variable`是一个存放会变化值的内存位置,里面的值会不停变化,像装糖果(糖果就是数据,即tensor)的盒子,糖果的数量不断变化。pytorch都是由tensor计算的,而tensor里面的参数是variable形式。\n",
"\n",
"PyTorch基础的数据是张量(Tensor),PyTorch 的很多操作好 NumPy 都是类似的,但是因为其能够在 GPU 上运行,所以有着比 NumPy 快很多倍的速度。本节内容主要包括 PyTorch 中的基本元素 Tensor 和 Variable 及其操作方式。"
]
},
@@ -32,10 +41,8 @@
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
@@ -44,10 +51,8 @@
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# 创建一个 numpy ndarray\n",
@@ -63,13 +68,11 @@
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"pytorch_tensor1 = torch.Tensor(numpy_tensor)\n",
"pytorch_tensor1 = torch.tensor(numpy_tensor)\n",
"pytorch_tensor2 = torch.from_numpy(numpy_tensor)"
]
},
@@ -96,10 +99,8 @@
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# 如果 pytorch tensor 在 cpu 上\n",
@@ -128,9 +129,7 @@
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"# 第一种方式是定义 cuda 数据类型\n",
@@ -161,9 +160,7 @@
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"cpu_tensor = gpu_tensor.cpu()"
@@ -697,6 +694,7 @@
"metadata": {},
"source": [
"## 参考\n",
"* [PyTorch官方说明文档](https://pytorch.org/docs/stable/)\n",
"* http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html\n",
"* http://cs231n.github.io/python-numpy-tutorial/"
]

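Aside: the hunk above replaces `torch.Tensor(numpy_tensor)` with `torch.tensor(numpy_tensor)`. As a minimal sketch of the distinction the notebook draws between the two creation paths (the array shape and values here are arbitrary, not from the notebook): `torch.tensor` copies the NumPy data, while `torch.from_numpy` shares its underlying memory; the device round-trip mirrors the notebook's `.cuda()`/`.cpu()` cells.

```python
import numpy as np
import torch

numpy_tensor = np.random.randn(10, 20)
pytorch_tensor1 = torch.tensor(numpy_tensor)      # copies the data
pytorch_tensor2 = torch.from_numpy(numpy_tensor)  # shares the underlying memory

numpy_tensor[0, 0] = 42.0
print(pytorch_tensor1[0, 0])  # unchanged: independent copy
print(pytorch_tensor2[0, 0])  # tensor(42., dtype=torch.float64): shared memory

# move between devices, guarded so the sketch also runs on CPU-only machines
if torch.cuda.is_available():
    gpu_tensor = pytorch_tensor2.cuda()
    cpu_tensor = gpu_tensor.cpu()
```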

6_pytorch/2-autograd.ipynb  (+23 -29)

@@ -15,16 +15,7 @@
"\n",
"从 PyTorch 0.4版本起, `Variable` 正式合并入 `Tensor` 类,通过 `Variable` 嵌套实现的自动微分功能已经整合进入了 `Tensor` 类中。虽然为了的兼容性还是可以使用 `Variable`(tensor)这种方式进行嵌套,但是这个操作其实什么都没做。\n",
"\n",
"以后的代码建议直接使用 `Tensor` 类进行操作,因为官方文档中已经将 `Variable` 设置成过期模块。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import torch"
"**以后的代码建议直接使用 `Tensor` 类进行操作,因为官方文档中已经将 `Variable` 设置成过期模块。**"
]
},
{
@@ -32,12 +23,13 @@
"metadata": {},
"source": [
"## 1. 简单情况的自动求导\n",
"下面我们显示一些简单情况的自动求导,\"简单\"体现在计算的结果都是标量,也就是一个数,我们对这个标量进行自动求导。"
"\n",
"下面展示一些简单情况的自动求导,\"简单\"体现在计算的结果都是标量,也就是一个数,对这个标量进行自动求导。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 1,
"metadata": {},
"outputs": [
{
@@ -49,6 +41,8 @@
}
],
"source": [
"import torch\n",
"\n",
"x = torch.tensor([2.0], requires_grad=True)\n",
"y = x + 2\n",
"z = y ** 2 + 3\n",
@@ -65,18 +59,18 @@
"z = (x + 2)^2 + 3\n",
"$$\n",
"\n",
"那么我们从 z 对 x 求导的结果就是 \n",
"那么我们从 $z$$x$ (当$x=2$)求导的结果就是 \n",
"\n",
"$$\n",
"\\frac{\\partial z}{\\partial x} = 2 (x + 2) = 2 (2 + 2) = 8\n",
"$$\n",
"\n",
"如果对求导不熟悉,可以查看以下[《导数介绍资料》](https://baike.baidu.com/item/%E5%AF%BC%E6%95%B0#1)网址进行复习"
">如果对求导不熟悉,可以查看[《导数介绍资料》](https://baike.baidu.com/item/%E5%AF%BC%E6%95%B0#1)进行复习"
]
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 2,
"metadata": {},
"outputs": [
{
@@ -97,12 +91,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"对于上面这样一个简单的例子,我们验证了自动求导,同时可以发现发现使用自动求导非常方便。如果是一个更加复杂的例子,那么手动求导就会显得非常的麻烦,所以自动求导的机制能够帮助我们省去麻烦的数学计算,下面我们可以看一个更加复杂的例子。"
"上面简单的例子验证了自动求导的功能,可以发现使用自动求导非常方便,不需要关系中间变量的状态。如果是一个更加复杂的例子,那么手动求导有可能非常的麻烦,所以自动求导的机制能够帮助我们省去繁琐的数学公式推导,下面给出一个更加复杂的例子。"
]
},
{
"cell_type": "code",
"execution_count": 8,
"execution_count": 3,
"metadata": {},
"outputs": [
{
@@ -124,7 +118,7 @@
},
{
"cell_type": "code",
"execution_count": 9,
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
@@ -136,12 +130,12 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"如果对矩阵乘法不熟悉,可以查看下面的[《矩阵乘法说明》](https://baike.baidu.com/item/%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95/5446029?fr=aladdin)进行复习"
"> 如果对矩阵乘法不熟悉,可以查看[《矩阵乘法说明》](https://baike.baidu.com/item/%E7%9F%A9%E9%98%B5%E4%B9%98%E6%B3%95/5446029?fr=aladdin)进行复习"
]
},
{
"cell_type": "code",
"execution_count": 10,
"execution_count": 5,
"metadata": {},
"outputs": [
{
@@ -196,12 +190,12 @@
"source": [
"## 2. 复杂情况的自动求导\n",
"\n",
"上面我们展示了简单情况下的自动求导,都是对标量进行自动求导,那么如何对一个向量或者矩阵自动求导?"
"上面展示了简单情况下的自动求导,都是对标量进行自动求导,那么如何对一个向量或者矩阵自动求导?"
]
},
{
"cell_type": "code",
"execution_count": 12,
"execution_count": 6,
"metadata": {},
"outputs": [
{
@@ -222,7 +216,7 @@
},
{
"cell_type": "code",
"execution_count": 13,
"execution_count": 7,
"metadata": {},
"outputs": [
{
@@ -280,7 +274,7 @@
},
{
"cell_type": "code",
"execution_count": 14,
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
@@ -289,7 +283,7 @@
},
{
"cell_type": "code",
"execution_count": 15,
"execution_count": 9,
"metadata": {},
"outputs": [
{
@@ -446,7 +440,7 @@
"k = (k_0,\\ k_1) = (x_0^2 + 3 x_1,\\ 2 x_0 + x_1^2)\n",
"$$\n",
"\n",
"我们希望求得\n",
"希望求得\n",
"\n",
"$$\n",
"j = \\left[\n",
@@ -460,7 +454,7 @@
},
{
"cell_type": "code",
"execution_count": 22,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
@@ -473,7 +467,7 @@
},
{
"cell_type": "code",
"execution_count": 23,
"execution_count": 11,
"metadata": {},
"outputs": [
{
@@ -504,7 +498,7 @@
},
{
"cell_type": "code",
"execution_count": 24,
"execution_count": 12,
"metadata": {
"scrolled": true
},

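For reference, here is a self-contained sketch of the two autograd cases this notebook diff touches: the scalar example `z = (x + 2)^2 + 3`, whose derivative at `x = 2` is 8, and a vector output `k` whose Jacobian rows are extracted by calling `backward()` with explicit gradient vectors. The concrete value of `x` in the vector case is an assumption for illustration; the diff abbreviates those cells.

```python
import torch

# scalar case: z = (x + 2)^2 + 3, so dz/dx = 2 * (x + 2) = 8 at x = 2
x = torch.tensor([2.0], requires_grad=True)
y = x + 2
z = y ** 2 + 3
z.backward()
print(x.grad)  # tensor([8.])

# vector case: k = (x0^2 + 3*x1, 2*x0 + x1^2); backward() on a non-scalar
# output needs a gradient argument; each unit vector extracts one Jacobian row
x = torch.tensor([2.0, 3.0], requires_grad=True)
k = torch.zeros(2)
k[0] = x[0] ** 2 + 3 * x[1]
k[1] = 2 * x[0] + x[1] ** 2

j = torch.zeros(2, 2)
k.backward(torch.tensor([1.0, 0.0]), retain_graph=True)  # keep graph for 2nd pass
j[0] = x.grad
x.grad.zero_()  # gradients accumulate, so clear between rows
k.backward(torch.tensor([0.0, 1.0]))
j[1] = x.grad
print(j)  # [[2*x0, 3], [2, 2*x1]] = [[4., 3.], [2., 6.]]
```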

6_pytorch/3-linear-regression.ipynb  (+46 -80): file diff suppressed because it is too large

6_pytorch/4-logistic-regression.ipynb  (+121 -215): file diff suppressed because it is too large

6_pytorch/5-deep-nn.ipynb  (+0 -693): file diff suppressed because it is too large

6_pytorch/5-nn-sequential-module.ipynb  (+149 -434): file diff suppressed because it is too large

6_pytorch/6-deep-nn.ipynb  (+671 -0): file diff suppressed because it is too large


6_pytorch/6-param_initialize.ipynb → 6_pytorch/7-param_initialize.ipynb  (+9 -13)

@@ -12,14 +12,14 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"PyTorch 的初始化方式并没有那么显然,如果你使用最原始的方式创建模型,那么需要定义模型中的所有参数,当然这样可以非常方便地定义每个变量的初始化方式,但是对于复杂的模型,这并不容易,而且我们推崇使用 Sequential 和 Module 来定义模型,所以这个时候我们就需要知道如何来自定义初始化方式"
"PyTorch 的初始化方式并没有那么显然,如果你使用最原始的方式创建模型,那么需要定义模型中的所有参数,当然这样可以非常方便地定义每个变量的初始化方式。但是对于复杂的模型,这并不容易,而且推荐使用 Sequential 和 Module 来定义模型,所以这个时候就需要知道如何来自定义初始化方式"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 使用 NumPy 来初始化\n",
"## 1. 使用 NumPy 来初始化\n",
"因为 PyTorch 是一个非常灵活的框架,理论上能够对所有的 Tensor 进行操作,所以我们能够通过定义新的 Tensor 来初始化,直接看下面的例子"
]
},
@@ -162,9 +162,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**小练习:一种非常流行的初始化方式叫 Xavier,方法来源于 2010 年的一篇论文 [Understanding the difficulty of training deep feedforward neural networks](http://proceedings.mlr.press/v9/glorot10a.html),其通过数学的推到,证明了这种初始化方式可以使得每一层的输出方差是尽可能相等的,有兴趣的同学可以去看看论文**\n",
"\n",
"我们给出这种初始化的公式\n",
"一种非常流行的初始化方式叫 Xavier,方法来源于 2010 年的一篇论文 [Understanding the difficulty of training deep feedforward neural networks](http://proceedings.mlr.press/v9/glorot10a.html),其通过数学的推到,证明了这种初始化方式可以使得每一层的输出方差是尽可能相等。这种初始化的公式为:\n",
"\n",
"$$\n",
"w\\ \\sim \\ Uniform[- \\frac{\\sqrt{6}}{\\sqrt{n_j + n_{j+1}}}, \\frac{\\sqrt{6}}{\\sqrt{n_j + n_{j+1}}}]\n",
@@ -340,8 +338,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## torch.nn.init\n",
"因为 PyTorch 灵活的特性,我们可以直接对 Tensor 进行操作从而初始化,PyTorch 也提供了初始化的函数帮助我们快速初始化,就是 `torch.nn.init`,其操作层面仍然在 Tensor 上,下面我们举例说明"
"## 2. `torch.nn.init`\n",
"因为 PyTorch 灵活的特性,可以直接对 Tensor 进行操作从而初始化,PyTorch 也提供了初始化的函数帮助我们快速初始化,就是 `torch.nn.init`,其操作层面仍然在 Tensor 上,下面我们举例说明"
]
},
{
@@ -439,22 +437,20 @@
"source": [
"可以看到参数已经被修改了\n",
"\n",
"`torch.nn.init` 为我们提供了更多的内置初始化方式,避免了我们重复去实现一些相同的操作"
"`torch.nn.init` 提供了更多的内置初始化方式,避免了重复去实现一些相同的操作"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"上面讲了两种初始化方式,其实它们的本质都是一样的,就是去修改某一层参数的实际值,而 `torch.nn.init` 提供了更多成熟的深度学习相关的初始化方式,非常方便\n",
"\n",
"下一节课,我们将讲一下目前流行的各种基于梯度的优化算法"
"上面讲了两种初始化方式,其实它们的本质都是一样的,就是去修改某一层参数的实际值,而 `torch.nn.init` 提供了更多成熟的深度学习相关的初始化方式。\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
@@ -468,7 +464,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.4"
"version": "3.9.7"
}
},
"nbformat": 4,

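To make the two initialization routes in this notebook concrete, here is a minimal sketch assuming a small `Sequential` model (the 30-40-10 layer sizes are illustrative, not from the notebook): first the manual NumPy implementation of the Xavier formula quoted above, then the equivalent built-in from `torch.nn.init`.

```python
import numpy as np
import torch
from torch import nn

# a small Sequential model; layer sizes are arbitrary, for illustration only
net = nn.Sequential(nn.Linear(30, 40), nn.ReLU(), nn.Linear(40, 10))

# manual Xavier: w ~ Uniform[-sqrt(6)/sqrt(n_j + n_{j+1}), sqrt(6)/sqrt(n_j + n_{j+1})]
for layer in net:
    if isinstance(layer, nn.Linear):
        n_out, n_in = layer.weight.shape  # nn.Linear stores (out_features, in_features)
        bound = np.sqrt(6.0 / (n_in + n_out))
        layer.weight.data = torch.from_numpy(
            np.random.uniform(-bound, bound, size=layer.weight.shape)
        ).float()

# the built-in equivalent: torch.nn.init also operates at the Tensor level
for layer in net:
    if isinstance(layer, nn.Linear):
        nn.init.xavier_uniform_(layer.weight)
```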
6_pytorch/imgs/MNIST.jpeg  (BIN)
Width: 474  |  Height: 203  |  Size: 41 kB

6_pytorch/imgs/logistic_function.png  (BIN)
Width: 558  |  Height: 375  |  Size: 24 kB

6_pytorch/imgs/softmax.jpeg  (BIN)
Width: 1597  |  Height: 894  |  Size: 102 kB

6_pytorch/optimizer/6_1-sgd.ipynb  (+4 -102)

@@ -10,107 +10,9 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\n",
"Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ../../../data/MNIST/raw/train-images-idx3-ubyte.gz\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"46.4%IOPub message rate exceeded.\n",
"The notebook server will temporarily stop sending output\n",
"to the client in order to avoid crashing it.\n",
"To change this limit, set the config variable\n",
"`--NotebookApp.iopub_msg_rate_limit`.\n",
"\n",
"Current values:\n",
"NotebookApp.iopub_msg_rate_limit=1000.0 (msgs/sec)\n",
"NotebookApp.rate_limit_window=3.0 (secs)\n",
"\n",
"98.4%IOPub message rate exceeded.\n",
"The notebook server will temporarily stop sending output\n",
"to the client in order to avoid crashing it.\n",
"To change this limit, set the config variable\n",
"`--NotebookApp.iopub_msg_rate_limit`.\n",
"\n",
"Current values:\n",
"NotebookApp.iopub_msg_rate_limit=1000.0 (msgs/sec)\n",
"NotebookApp.rate_limit_window=3.0 (secs)\n",
"\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloading http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz to ../../../data/MNIST/raw/train-labels-idx1-ubyte.gz\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"102.8%\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ../../../data/MNIST/raw/train-labels-idx1-ubyte.gz to ../../../data/MNIST/raw\n",
"\n",
"Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\n",
"Downloading http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz to ../../../data/MNIST/raw/t10k-images-idx3-ubyte.gz\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"100.0%\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ../../../data/MNIST/raw/t10k-images-idx3-ubyte.gz to ../../../data/MNIST/raw\n",
"\n",
"Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\n",
"Downloading http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz to ../../../data/MNIST/raw/t10k-labels-idx1-ubyte.gz\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"112.7%"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"Extracting ../../../data/MNIST/raw/t10k-labels-idx1-ubyte.gz to ../../../data/MNIST/raw\n",
"\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"\n"
]
}
],
"outputs": [],
"source": [
"import numpy as np\n",
"import torch\n",
@@ -129,8 +31,8 @@
" x = torch.from_numpy(x)\n",
" return x\n",
"\n",
"train_set = MNIST('../../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../../data/mnist', train=False, transform=data_tf, download=True)\n",
"train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../data/mnist', train=False, transform=data_tf, download=True)\n",
"\n",
"# 定义 loss 函数\n",
"criterion = nn.CrossEntropyLoss()"

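The hunk above keeps only the data pipeline; for orientation, here is a minimal training-loop sketch built on it. The body of `data_tf` is reconstructed from the course's usual version (only its last lines appear in the diff), and the 784-200-10 network and learning rate are illustrative assumptions, not necessarily the notebook's values.

```python
import numpy as np
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST

def data_tf(x):
    # normalize to [-1, 1] and flatten; reconstructed from the course notebooks
    x = np.array(x, dtype='float32') / 255
    x = (x - 0.5) / 0.5
    x = x.reshape((-1,))
    x = torch.from_numpy(x)
    return x

train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True)
train_data = DataLoader(train_set, batch_size=64, shuffle=True)

# illustrative 784-200-10 MLP; the notebook's own architecture may differ
net = nn.Sequential(nn.Linear(784, 200), nn.ReLU(), nn.Linear(200, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=1e-2)

for im, label in train_data:
    out = net(im)
    loss = criterion(out, label)
    optimizer.zero_grad()   # clear accumulated gradients
    loss.backward()         # backpropagate
    optimizer.step()        # SGD parameter update
```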

6_pytorch/optimizer/6_2-momentum.ipynb  (+2 -2)

@@ -104,8 +104,8 @@
" x = torch.from_numpy(x)\n",
" return x\n",
"\n",
"train_set = MNIST('../../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../../data/mnist', train=False, transform=data_tf, download=True)\n",
"train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../data/mnist', train=False, transform=data_tf, download=True)\n",
"\n",
"# 定义 loss 函数\n",
"criterion = nn.CrossEntropyLoss()"


6_pytorch/optimizer/6_3-adagrad.ipynb  (+2 -2)

@@ -68,8 +68,8 @@
" x = torch.from_numpy(x)\n",
" return x\n",
"\n",
"train_set = MNIST('../../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../../data/mnist', train=False, transform=data_tf, download=True)\n",
"train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../data/mnist', train=False, transform=data_tf, download=True)\n",
"\n",
"# 定义 loss 函数\n",
"criterion = nn.CrossEntropyLoss()"


6_pytorch/optimizer/6_4-rmsprop.ipynb  (+2 -2)

@@ -66,8 +66,8 @@
" x = torch.from_numpy(x)\n",
" return x\n",
"\n",
"train_set = MNIST('../../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../../data/mnist', train=False, transform=data_tf, download=True)\n",
"train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../data/mnist', train=False, transform=data_tf, download=True)\n",
"\n",
"# 定义 loss 函数\n",
"criterion = nn.CrossEntropyLoss()"


6_pytorch/optimizer/6_5-adadelta.ipynb  (+2 -2)

@@ -77,8 +77,8 @@
" x = torch.from_numpy(x)\n",
" return x\n",
"\n",
"train_set = MNIST('../../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../../data/mnist', train=False, transform=data_tf, download=True)\n",
"train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../data/mnist', train=False, transform=data_tf, download=True)\n",
"\n",
"# 定义 loss 函数\n",
"criterion = nn.CrossEntropyLoss()"


6_pytorch/optimizer/6_6-adam.ipynb  (+2 -2)

@@ -83,8 +83,8 @@
" x = torch.from_numpy(x)\n",
" return x\n",
"\n",
"train_set = MNIST('../../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../../data/mnist', train=False, transform=data_tf, download=True)\n",
"train_set = MNIST('../../data/mnist', train=True, transform=data_tf, download=True) # 载入数据集,申明定义的数据变换\n",
"test_set = MNIST('../../data/mnist', train=False, transform=data_tf, download=True)\n",
"\n",
"# 定义 loss 函数\n",
"criterion = nn.CrossEntropyLoss()"

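All six optimizer notebooks share this same data-loading cell, so the essential difference between them is a single line: the optimizer construction. Assuming the `net` from the SGD sketch earlier, the Adam variant would be (learning rate chosen for illustration):

```python
# swap the optimizer; the rest of the training loop is unchanged
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)  # betas default to (0.9, 0.999)
```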

7_deep_learning/imgs/ResNet.png  (BIN)
Width: 616  |  Height: 1295  |  Size: 104 kB

7_deep_learning/imgs/lena.png  (BIN)
Width: 200  |  Height: 200  |  Size: 23 kB

7_deep_learning/imgs/lena3.png  (BIN)
Width: 512  |  Height: 512  |  Size: 151 kB

7_deep_learning/imgs/lena512.png  (BIN)
Width: 512  |  Height: 512  |  Size: 151 kB

7_deep_learning/imgs/nn_lenet.png  (BIN)
Width: 759  |  Height: 209  |  Size: 17 kB

7_deep_learning/imgs/residual.png  (BIN)
Width: 317  |  Height: 615  |  Size: 82 kB

7_deep_learning/imgs/resnet1.png  (BIN)
Width: 1354  |  Height: 269  |  Size: 69 kB

7_deep_learning/imgs/tensor_data_structure.svg  (+2 -0): file diff suppressed because it is too large

7_deep_learning/imgs/trans.bkp.PNG  (BIN)
Width: 510  |  Height: 185  |  Size: 7.8 kB

README.md  (+9 -11)

@@ -39,17 +39,15 @@
- [Multi-layer Perceptron & BP](5_nn/2-mlp_bp.ipynb)
- [Softmax & cross-entroy](5_nn/3-softmax_ce.ipynb)
8. [PyTorch](6_pytorch/README.md)
- Basic
- [Tensor and Variable](6_pytorch/0_basic/1-Tensor-and-Variable.ipynb)
- [autograd](6_pytorch/0_basic/2-autograd.ipynb)
- NN & Optimization
- [nn/linear-regression-gradient-descend](6_pytorch/1_NN/1-linear-regression-gradient-descend.ipynb)
- [nn/logistic-regression](6_pytorch/1_NN/2-logistic-regression.ipynb)
- [nn/nn-sequential-module](6_pytorch/1_NN/3-nn-sequential-module.ipynb)
- [nn/deep-nn](6_pytorch/1_NN/4-deep-nn.ipynb)
- [nn/param_initialize](6_pytorch/1_NN/5-param_initialize.ipynb)
- [optim/sgd](6_pytorch/1_NN/optimizer/6_1-sgd.ipynb)
- [optim/adam](6_pytorch/1_NN/optimizer/6_6-adam.ipynb)
- [Tensor](6_pytorch/1-tensor.ipynb)
- [autograd](6_pytorch/2-autograd.ipynb)
- [linear-regression](6_pytorch/3-linear-regression.ipynb)
- [logistic-regression](6_pytorch/4-logistic-regression.ipynb)
- [nn-sequential-module](6_pytorch/5-nn-sequential-module.ipynb)
- [deep-nn](6_pytorch/6-deep-nn.ipynb)
- [param_initialize](6_pytorch/7-param_initialize.ipynb)
- [optim/sgd](6_pytorch/optimizer/6_1-sgd.ipynb)
- [optim/adam](6_pytorch/optimizer/6_6-adam.ipynb)
9. [Deep Learning](7_deep_learning/README.md)
- CNN
- [CNN Introduction](7_deep_learning/1_CNN/CNN_Introduction.pptx)


README_ENG.md  (+75 -72)

@@ -1,16 +1,21 @@
# Machine Learning
# Machine Learning and Artificial Intelligence

This tutorial mainly explains the basic principles and implementation of machine learning. It guides you to quickly learn Python, common Python libraries, machine learning theory, and practical programming, and to learn how to solve real problems
Machine learning is increasingly applied in fields such as aircraft and robotics; its purpose is to use computers to achieve human-like intelligence and thereby make equipment intelligent and unmanned. This course aims to guide students to master the basic knowledge, typical methods, and techniques of machine learning, to spark interest in the discipline through concrete application cases, and to encourage students to analyze and solve the problems and challenges faced by aircraft and robots from an artificial intelligence perspective. The main content includes Python programming fundamentals, machine learning models, unsupervised learning, supervised learning, and the fundamentals and implementation of deep learning, as well as how to use machine learning to solve real problems, thereby comprehensively improving one's [overall abilities](Targets.md)

Since **this course requires a large amount of programming practice to achieve good results**, you need to conscientiously complete the [assignments and reports](https://gitee.com/pi-lab/machinelearning_homework). While doing the assignments you may consult online materials, but you must not copy them directly; think independently and write the code yourself.
Since **this course requires a large amount of programming practice to achieve good results**, you need to conscientiously complete the [Machine Learning and AI: Assignments and Reports](https://gitee.com/pi-lab/machinelearning_homework). While doing the assignments you may consult online materials, but you must not copy them directly; think independently and write the code yourself. For installing Python and the other runtime environments for this tutorial, see [Installing the Python Environment](references_tips/InstallPython.md).

![Machine Learning Cover](images/machine_learning.png)
To help you self-study this course, the lecture videos are available at [Bilibili: Machine Learning and Artificial Intelligence](https://www.bilibili.com/video/BV1oZ4y1N7ei/); you are welcome to watch and learn from them.



![Machine Learning Cover](images/machine_learning_1.jpg)


## 1. Contents
1. [Course Introduction](CourseIntroduction.pdf)
2. [Python](0_python/)
- [Install Python](tips/InstallPython.md)
2. [Python](0_python/README.md)
- [Install Python](references_tips/InstallPython.md)
- [ipython & notebook](0_python/0-ipython_notebook.ipynb)
- [Python Basics](0_python/1_Basics.ipynb)
- [Print Statement](0_python/2_Print_Statement.ipynb)
- [Data Structure 1](0_python/3_Data_Structure_1.ipynb)
@@ -18,93 +23,91 @@
- [Control Flow](0_python/5_Control_Flow.ipynb)
- [Function](0_python/6_Function.ipynb)
- [Class](0_python/7_Class.ipynb)
3. [numpy & matplotlib](1_numpy_matplotlib_scipy_sympy/)
- [numpy](1_numpy_matplotlib_scipy_sympy/numpy_tutorial.ipynb)
- [matplotlib](1_numpy_matplotlib_scipy_sympy/matplotlib_simple_tutorial.ipynb)
- [ipython & notebook](1_numpy_matplotlib_scipy_sympy/ipython_notebook.ipynb)
4. [knn](2_knn/knn_classification.ipynb)
5. [kMenas](3_kmeans/k-means.ipynb)
3. [numpy & matplotlib](1_numpy_matplotlib_scipy_sympy/README.md)
- [numpy](1_numpy_matplotlib_scipy_sympy/1-numpy_tutorial.ipynb)
- [matplotlib](1_numpy_matplotlib_scipy_sympy/2-matplotlib_tutorial.ipynb)
4. [kNN](2_knn/knn_classification.ipynb)
5. [kMeans](3_kmeans/1-k-means.ipynb)
- [kMeans - Image Compression](3_kmeans/2-kmeans-color-vq.ipynb)
- [Cluster Algorithms](3_kmeans/3-ClusteringAlgorithms.ipynb)
6. [Logistic Regression](4_logistic_regression/)
- [Least squares](4_logistic_regression/Least_squares.ipynb)
- [Logistic regression](4_logistic_regression/Logistic_regression.ipynb)
- [Least squares](4_logistic_regression/1-Least_squares.ipynb)
- [Logistic regression](4_logistic_regression/2-Logistic_regression.ipynb)
- [PCA and Logistic regression](4_logistic_regression/3-PCA_and_Logistic_Regression.ipynb)
7. [Neural Network](5_nn/)
- [Perceptron](5_nn/Perceptron.ipynb)
- [Multi-layer Perceptron & BP](5_nn/mlp_bp.ipynb)
- [Softmax & cross-entroy](5_nn/softmax_ce.ipynb)
8. [PyTorch](6_pytorch/)
- Basic
- [short tutorial](6_pytorch/PyTorch_quick_intro.ipynb)
- [basic/Tensor-and-Variable](6_pytorch/0_basic/Tensor-and-Variable.ipynb)
- [basic/autograd](6_pytorch/0_basic/autograd.ipynb)
- [basic/dynamic-graph](6_pytorch/0_basic/dynamic-graph.ipynb)
- NN & Optimization
- [nn/linear-regression-gradient-descend](6_pytorch/1_NN/linear-regression-gradient-descend.ipynb)
- [nn/logistic-regression](6_pytorch/1_NN/logistic-regression.ipynb)
- [nn/nn-sequential-module](6_pytorch/1_NN/nn-sequential-module.ipynb)
- [nn/bp](6_pytorch/1_NN/bp.ipynb)
- [nn/deep-nn](6_pytorch/1_NN/deep-nn.ipynb)
- [nn/param_initialize](6_pytorch/1_NN/param_initialize.ipynb)
- [optim/sgd](6_pytorch/1_NN/optimizer/sgd.ipynb)
- [optim/adam](6_pytorch/1_NN/optimizer/adam.ipynb)
- [Perceptron](5_nn/1-Perceptron.ipynb)
- [Multi-layer Perceptron & BP](5_nn/2-mlp_bp.ipynb)
- [Softmax & cross-entroy](5_nn/3-softmax_ce.ipynb)
8. [PyTorch](6_pytorch/README.md)
- [Tensor](6_pytorch/1-tensor.ipynb)
- [autograd](6_pytorch/2-autograd.ipynb)
- [linear-regression](6_pytorch/3-linear-regression.ipynb)
- [logistic-regression](6_pytorch/4-logistic-regression.ipynb)
- [nn-sequential-module](6_pytorch/5-nn-sequential-module.ipynb)
- [deep-nn](6_pytorch/6-deep-nn.ipynb)
- [param_initialize](6_pytorch/7-param_initialize.ipynb)
- [optim/sgd](6_pytorch/optimizer/6_1-sgd.ipynb)
- [optim/adam](6_pytorch/optimizer/6_6-adam.ipynb)
9. [Deep Learning](7_deep_learning/README.md)
- CNN
- [CNN Introduction](7_deep_learning/1_CNN/CNN_Introduction.pptx)
- [CNN simple demo](demo_code/3_CNN_MNIST.py)
- [cnn/basic_conv](6_pytorch/2_CNN/basic_conv.ipynb)
- [cnn/mnist (demo code)](./demo_code/3_CNN_MNIST.py)
- [cnn/batch-normalization](6_pytorch/2_CNN/batch-normalization.ipynb)
- [cnn/regularization](6_pytorch/2_CNN/regularization.ipynb)
- [cnn/lr-decay](6_pytorch/2_CNN/lr-decay.ipynb)
- [cnn/vgg](6_pytorch/2_CNN/vgg.ipynb)
- [cnn/googlenet](6_pytorch/2_CNN/googlenet.ipynb)
- [cnn/resnet](6_pytorch/2_CNN/resnet.ipynb)
- [cnn/densenet](6_pytorch/2_CNN/densenet.ipynb)
- [cnn/basic_conv](7_deep_learning/1_CNN/1-basic_conv.ipynb)
- [cnn/batch-normalization](7_deep_learning/1_CNN/2-batch-normalization.ipynb)
- [cnn/lr-decay](7_deep_learning/2_CNN/1-lr-decay.ipynb)
- [cnn/regularization](7_deep_learning/1_CNN/4-regularization.ipynb)
- [cnn/vgg](7_deep_learning/1_CNN/6-vgg.ipynb)
- [cnn/googlenet](7_deep_learning/1_CNN/7-googlenet.ipynb)
- [cnn/resnet](7_deep_learning/1_CNN/8-resnet.ipynb)
- [cnn/densenet](7_deep_learning/1_CNN/9-densenet.ipynb)
- RNN
- [rnn/pytorch-rnn](6_pytorch/3_RNN/pytorch-rnn.ipynb)
- [rnn/rnn-for-image](6_pytorch/3_RNN/rnn-for-image.ipynb)
- [rnn/lstm-time-series](6_pytorch/3_RNN/time-series/lstm-time-series.ipynb)
- [rnn/pytorch-rnn](7_deep_learning/2_RNN/pytorch-rnn.ipynb)
- [rnn/rnn-for-image](7_deep_learning/2_RNN/rnn-for-image.ipynb)
- [rnn/lstm-time-series](7_deep_learning/2_RNN/time-series/lstm-time-series.ipynb)
- GAN
- [gan/autoencoder](6_pytorch/4_GAN/autoencoder.ipynb)
- [gan/vae](6_pytorch/4_GAN/vae.ipynb)
- [gan/gan](6_pytorch/4_GAN/gan.ipynb)
- [gan/autoencoder](7_deep_learning/3_GAN/autoencoder.ipynb)
- [gan/vae](7_deep_learning/3_GAN/vae.ipynb)
- [gan/gan](7_deep_learning/3_GAN/gan.ipynb)



## 2. Study suggestions
1. To learn this course well, you need to build solid basic Python programming skills; only then will the later study of machine learning methods be on firm ground.
2. The first part of each lesson is theory, followed by the code implementation. If you want a more solid grasp, implement each method's code yourself. While doing so, try to come up with solutions on your own, because the important goal is not the code itself but the ability to analyze and solve problems.
1. To learn this course well, cultivate your Python programming skills: build a Python programming mindset through a good number of exercises and small projects, laying a solid foundation for the machine learning theory and practice that follow.
2. The first half of each lesson is theory, the second half is the code implementation. If you want a more solid grasp, implement each method's code yourself. If you run into problems along the way, try as much as possible to find solutions on your own, because the most important goal is not the code itself but the ability to analyze and solve problems.
3. **Do not directly copy existing programs or other people's programs.** If you do not know how, think it through, look for a solution, or ask. Copying someone else's code makes the exercise completely pointless. **If it feels too hard, go slower, but keep thinking for yourself and writing the practice code yourself.**
4. **Please first walk through all the folders to see what content and materials are there.** Each directory contains many explanatory documents; when stuck, first check whether there is a document, and if no suitable one exists, search online. This process trains your ability to search for literature and materials.
5. The exercises in this course are best done with [Linux](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/linux) and Linux tools. Push yourself to use [Linux](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/linux); only frequent practice and use bring rapid progress. If it is really too hard, first install Linux (e.g. Ubuntu or Linux Mint) in a virtual machine (VirtualBox recommended) and get familiar with it, but in the end you need to learn to use Linux.


## 3. Other reference materials

## 3. References
* Quick reference
* [Collection of related learning references](References.md)
* [Some cheatsheets](tips/cheatsheet)
* [Some cheatsheets](references_tips/cheatsheet)

* Machine learning tips
* [Confusion Matrix](tips/confusion_matrix.ipynb)
* [Datasets](tips/datasets.ipynb)
* [Practical advice for building deep neural networks](tips/构建深度神经网络的一些实战建议.md)
* [Intro to Deep Learning](tips/Intro_to_Deep_Learning.pdf)
* [Confusion Matrix](references_tips/confusion_matrix.ipynb)
* [Datasets](references_tips/datasets.ipynb)
* [Practical advice for building deep neural networks](references_tips/构建深度神经网络的一些实战建议.md)
* [Intro to Deep Learning](references_tips/Intro_to_Deep_Learning.pdf)

* Python tips
* [Installing the Python environment](tips/InstallPython.md)
* [Python tips](tips/python)
* [Installing the Python environment](references_tips/InstallPython.md)
* [Python tips](references_tips/python)

* [Git tutorial](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/git/README.md)
* [Markdown tutorial](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/markdown/README.md)

* Git
* [Git Tips: quick reference for common commands and quick start](https://gitee.com/pi-lab/learn_programming/blob/master/6_tools/git/git-tips.md)
* [Git quick start: a first taste of Git](https://my.oschina.net/dxqr/blog/134811)
* [Basic Git operations with TortoiseGit on Windows 7](https://my.oschina.net/longxuu/blog/141699)
* [Systematic Git study: Liao Xuefeng's Git tutorial](https://www.liaoxuefeng.com/wiki/0013739516305929606dd18361248578c67b8067c8c017b000)

* Markdown
* [Markdown: a beginner's guide](https://www.jianshu.com/p/1e402922ee32)


## 4. Related learning materials
## 4. Further study

After finishing the content above, you can pursue further study and research in machine learning and computer vision; for concrete materials, see:
1. [Learn Programming Step by Step](https://gitee.com/pi-lab/learn_programming)
2. Intelligent Systems Lab: training tutorials and assignments
- [Intelligent Systems Lab Summer Camp Tutorials](https://gitee.com/pi-lab/SummerCamp)
- [Intelligent Systems Lab Summer Camp Homework](https://gitee.com/pi-lab/SummerCampHomework)
3. [Intelligent Systems Lab research topics](https://gitee.com/pi-lab/pilab_research_fields)
4. [Code references and tips collection](https://gitee.com/pi-lab/code_cook)
- You can look up examples of a given piece of functionality in this collection of code and tips, speeding up your own coding
1. Programming is a crucial skill for machine learning research and implementation: weak programming skills prevent rapid trial and error and slow down study and research, while strong programming skills let you iterate quickly and write experimental code fast. It is strongly recommended that, during or after this course, you train your fundamentals such as data structures and algorithms. For a concrete tutorial, see [Learn Programming Step by Step](https://gitee.com/pi-lab/learn_programming)
2. Aircraft Intelligent Perception and Control Lab: training tutorials and assignments. This is the lab's accumulated collection of tutorials on machine learning and computer vision; each course presents the basic principles, programming implementation, and application methods, and can serve as quick-start learning material.
- [Aircraft Intelligent Perception and Control Lab Summer Camp Tutorials](https://gitee.com/pi-lab/SummerCamp)
- [Aircraft Intelligent Perception and Control Lab Summer Camp Homework](https://gitee.com/pi-lab/SummerCampHomework)
3. Visual SLAM is a system that tightly integrates algorithms, techniques, and programming; studying and practicing SLAM can greatly improve your programming and problem-solving skills. For a concrete tutorial, see [Learn SLAM Step by Step](https://gitee.com/pi-lab/learn_slam)
4. [Code references and tips collection](https://gitee.com/pi-lab/code_cook): you can look up examples of a given piece of functionality in this collection, speeding up your own coding
5. [Study methodology and techniques](https://gitee.com/pi-lab/pilab_research_fields)
