Introduction to Generative Adversarial Networks
A generative adversarial network (GAN) is inspired by the two-player zero-sum game of game theory, rather like Zhou Botong's signature technique of fighting his left hand against his right ("左右互搏"). The two players in a GAN are a generative model and a discriminative model. The generative model G captures the distribution of the sample data: starting from noise z drawn from some distribution (uniform, Gaussian, etc.), it produces a sample that resembles the real training data, and the more closely it resembles a real sample the better. The discriminative model D is a binary classifier that estimates the probability that a sample came from the training data rather than from the generator: if the sample is real training data, D should output a high probability; otherwise it should output a low one. A useful analogy: the generator G is a gang of counterfeiters manufacturing fake banknotes, while the discriminator D is the police, checking whether the money in circulation is genuine or fake. G's goal is to produce currency so convincing that D cannot tell it apart from the real thing, while D's goal is to detect every fake that G produces. As training goes on, both the discriminative model and the generative model steadily improve.
A schematic of the generative adversarial network is shown below:
Implementing a Generative Adversarial Network in TensorFlow
from __future__ import division, print_function, absolute_import
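Besides the `__future__` imports above, the example needs TensorFlow itself, NumPy for sampling noise vectors, and Matplotlib for displaying generated digits. A minimal sketch, assuming the TensorFlow 1.x graph API used throughout this post:

```python
import matplotlib.pyplot as plt   # display generated digits at the end
import numpy as np                # sample the noise vectors z
import tensorflow as tf           # TensorFlow 1.x graph-mode API
```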
Importing the dataset
# Import the MNIST dataset
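A minimal sketch of the loading step, assuming the TensorFlow 1.x tutorial loader and the `./data/` directory that appears in the extraction log below; the class labels are not needed by the GAN:

```python
from tensorflow.examples.tutorials.mnist import input_data

# Downloads (if necessary) and extracts MNIST into ./data/
mnist = input_data.read_data_sets("./data/", one_hot=True)
```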
Extracting ./data/train-images-idx3-ubyte.gz
Extracting ./data/train-labels-idx1-ubyte.gz
Extracting ./data/t10k-images-idx3-ubyte.gz
Extracting ./data/t10k-labels-idx1-ubyte.gz
Parameter settings
# Training Params
The Xavier initialization scheme sets the variance of each weight to $\mathrm{Var}(W) = \frac{2}{n_{\mathrm{in}} + n_{\mathrm{out}}}$, where $n_{\mathrm{in}}$ and $n_{\mathrm{out}}$ are the fan-in and fan-out of the layer. Note that the parameter used in the code is the standard deviation, not the variance.
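A sketch of the hyperparameters and the random-normal initializer described above. The number of steps matches the training log further down; the layer sizes, noise dimension, and learning rate are assumed values, and the initializer uses the common fan-in-only simplification of the Xavier formula, parameterized by a standard deviation:

```python
# Training parameters (num_steps matches the log printed every 2000 steps)
num_steps = 70000
batch_size = 128
learning_rate = 0.0002     # assumed value

# Network parameters (assumed sizes)
image_dim = 784            # 28 * 28 MNIST pixels
gen_hidden_dim = 256       # generator hidden units
disc_hidden_dim = 256      # discriminator hidden units
noise_dim = 100            # dimension of the noise vector z

# Xavier-style initializer: note that tf.random_normal takes a standard
# deviation (here sqrt(2 / fan_in)), not a variance
def glorot_init(shape):
    return tf.random_normal(shape=shape, stddev=1. / tf.sqrt(shape[0] / 2.))
```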
Setting the weights and biases of each layer
# Set each layer's weights (Xavier initialization) and biases (initialized to zero)
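Continuing the sketch, each network gets one hidden layer; the weight matrices are drawn from the Xavier-style initializer above and every bias vector starts at zero (the dictionary keys are illustrative names):

```python
weights = {
    'gen_hidden1':  tf.Variable(glorot_init([noise_dim, gen_hidden_dim])),
    'gen_out':      tf.Variable(glorot_init([gen_hidden_dim, image_dim])),
    'disc_hidden1': tf.Variable(glorot_init([image_dim, disc_hidden_dim])),
    'disc_out':     tf.Variable(glorot_init([disc_hidden_dim, 1])),
}
biases = {
    'gen_hidden1':  tf.Variable(tf.zeros([gen_hidden_dim])),
    'gen_out':      tf.Variable(tf.zeros([image_dim])),
    'disc_hidden1': tf.Variable(tf.zeros([disc_hidden_dim])),
    'disc_out':     tf.Variable(tf.zeros([1])),
}
```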
Defining the generative adversarial network
# Define the generator function
The network structure of this GAN is similar to a multilayer perceptron: both the generator and the discriminator are small fully connected networks.
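A sketch of both networks as single-hidden-layer perceptrons, followed by the graph, the losses, and one optimizer per player. The generator loss uses the common non-saturating form -log D(G(z)); all variable names are illustrative:

```python
# Generator: noise vector z -> 784-dimensional image with pixels in (0, 1)
def generator(x):
    hidden = tf.nn.relu(tf.add(tf.matmul(x, weights['gen_hidden1']),
                               biases['gen_hidden1']))
    out = tf.add(tf.matmul(hidden, weights['gen_out']), biases['gen_out'])
    return tf.nn.sigmoid(out)

# Discriminator: image -> probability that the image is real
def discriminator(x):
    hidden = tf.nn.relu(tf.add(tf.matmul(x, weights['disc_hidden1']),
                               biases['disc_hidden1']))
    out = tf.add(tf.matmul(hidden, weights['disc_out']), biases['disc_out'])
    return tf.nn.sigmoid(out)

# Graph inputs
gen_input = tf.placeholder(tf.float32, shape=[None, noise_dim], name='input_noise')
disc_input = tf.placeholder(tf.float32, shape=[None, image_dim], name='disc_input')

# D is applied to both real images and G's samples
gen_sample = generator(gen_input)
disc_real = discriminator(disc_input)
disc_fake = discriminator(gen_sample)

# Losses: G maximizes log D(G(z)); D maximizes log D(x) + log(1 - D(G(z)))
gen_loss = -tf.reduce_mean(tf.log(disc_fake))
disc_loss = -tf.reduce_mean(tf.log(disc_real) + tf.log(1. - disc_fake))

# Each player only updates its own variables
gen_vars = [weights['gen_hidden1'], weights['gen_out'],
            biases['gen_hidden1'], biases['gen_out']]
disc_vars = [weights['disc_hidden1'], weights['disc_out'],
             biases['disc_hidden1'], biases['disc_out']]
train_gen = tf.train.AdamOptimizer(learning_rate).minimize(gen_loss, var_list=gen_vars)
train_disc = tf.train.AdamOptimizer(learning_rate).minimize(disc_loss, var_list=disc_vars)
```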
Training the generative adversarial network
# Start training
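A sketch of the loop that would produce the log below: every step updates both players on one mini-batch, and the losses are printed at step 1 and every 2000 steps thereafter. The session is created without a `with` block so the test section can keep using it:

```python
sess = tf.Session()
sess.run(tf.global_variables_initializer())

for i in range(1, num_steps + 1):
    # Real images for D, fresh uniform noise for G
    batch_x, _ = mnist.train.next_batch(batch_size)
    z = np.random.uniform(-1., 1., size=[batch_size, noise_dim])

    # One update of each player; also fetch the current losses
    _, _, gl, dl = sess.run([train_gen, train_disc, gen_loss, disc_loss],
                            feed_dict={disc_input: batch_x, gen_input: z})

    if i % 2000 == 0 or i == 1:
        print('Step %i: Generator Loss: %f, Discriminator Loss: %f' % (i, gl, dl))
```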
Step 1: Generator Loss: 0.223592, Discriminator Loss: 2.090910
Step 2000: Generator Loss: 4.678916, Discriminator Loss: 0.041115
Step 4000: Generator Loss: 3.605874, Discriminator Loss: 0.068698
Step 6000: Generator Loss: 3.845584, Discriminator Loss: 0.190420
Step 8000: Generator Loss: 4.470613, Discriminator Loss: 0.117488
Step 10000: Generator Loss: 3.813103, Discriminator Loss: 0.146255
Step 12000: Generator Loss: 2.991248, Discriminator Loss: 0.392258
Step 14000: Generator Loss: 3.769275, Discriminator Loss: 0.153639
Step 16000: Generator Loss: 4.366917, Discriminator Loss: 0.206618
Step 18000: Generator Loss: 4.052875, Discriminator Loss: 0.225112
Step 20000: Generator Loss: 3.574747, Discriminator Loss: 0.362798
Step 22000: Generator Loss: 3.760236, Discriminator Loss: 0.188211
Step 24000: Generator Loss: 3.055995, Discriminator Loss: 0.354645
Step 26000: Generator Loss: 3.619049, Discriminator Loss: 0.211489
Step 28000: Generator Loss: 3.523777, Discriminator Loss: 0.273607
Step 30000: Generator Loss: 3.889854, Discriminator Loss: 0.286803
Step 32000: Generator Loss: 3.106094, Discriminator Loss: 0.298111
Step 34000: Generator Loss: 3.548391, Discriminator Loss: 0.343262
Step 36000: Generator Loss: 3.081174, Discriminator Loss: 0.332788
Step 38000: Generator Loss: 2.946176, Discriminator Loss: 0.335102
Step 40000: Generator Loss: 3.078653, Discriminator Loss: 0.465524
Step 42000: Generator Loss: 2.601799, Discriminator Loss: 0.409574
Step 44000: Generator Loss: 3.168177, Discriminator Loss: 0.325075
Step 46000: Generator Loss: 2.601811, Discriminator Loss: 0.428143
Step 48000: Generator Loss: 2.853810, Discriminator Loss: 0.403768
Step 50000: Generator Loss: 2.690175, Discriminator Loss: 0.483180
Step 52000: Generator Loss: 3.278867, Discriminator Loss: 0.375016
Step 54000: Generator Loss: 2.869437, Discriminator Loss: 0.477840
Step 56000: Generator Loss: 2.561056, Discriminator Loss: 0.449300
Step 58000: Generator Loss: 2.814199, Discriminator Loss: 0.484522
Step 60000: Generator Loss: 2.469474, Discriminator Loss: 0.428359
Step 62000: Generator Loss: 2.721684, Discriminator Loss: 0.494090
Step 64000: Generator Loss: 2.491284, Discriminator Loss: 0.654795
Step 66000: Generator Loss: 2.725388, Discriminator Loss: 0.423149
Step 68000: Generator Loss: 2.758215, Discriminator Loss: 0.513224
Step 70000: Generator Loss: 3.072056, Discriminator Loss: 0.481437
Testing
# Test
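A sketch of the test step: feed fresh noise through the trained generator and display the samples as 28x28 grayscale digits (the 1x10 grid is an illustrative layout choice):

```python
# Generate a row of digits from random noise using the trained generator
n_images = 10
z = np.random.uniform(-1., 1., size=[n_images, noise_dim])
samples = sess.run(gen_sample, feed_dict={gen_input: z})

fig, axes = plt.subplots(1, n_images, figsize=(n_images, 1.2))
for i in range(n_images):
    axes[i].imshow(samples[i].reshape(28, 28), cmap='gray')
    axes[i].axis('off')
plt.show()

sess.close()
```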