PyTorch API usage examples

Example code versions: PyTorch 0.3.0, Python 3.5

  1. torch.nn.Softmax()
import torch
import torch.nn as nn
from torch.autograd import Variable
import numpy as np

bottom = np.random.randint(0,10,(1,2,3,3)).astype('float32')

print(bottom)
>>>[[[[7. 8. 7.]
   [3. 5. 2.]
   [8. 5. 9.]]

  [[2. 3. 6.]
   [7. 8. 9.]
   [4. 6. 6.]]]]

bt = Variable(torch.from_numpy(bottom).cuda())
sf = nn.Softmax(dim=1)  # softmax along dim=1, i.e. over the channels at each pixel; the output shows that the channel values at each pixel sum to 1. Whichever dim you pass is the dimension that is normalized on its own. For a 4D tensor, nn.Softmax2d() is equivalent to nn.Softmax(dim=1).
print(sf(bt))
>>>
Variable containing:
(0 ,0 ,.,.) =
 0.9933 0.9933 0.7311
 0.0180 0.0474 0.0009
 0.9820 0.2689 0.9526

(0 ,1 ,.,.) =
 0.0067 0.0067 0.2689
 0.9820 0.9526 0.9991
 0.0180 0.7311 0.0474
[torch.cuda.FloatTensor of size 1x2x3x3 (GPU 0)]
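
As a quick check on the comment above, the snippet below is a minimal sketch that reuses the bt and sf defined earlier: summing the softmax output over dim=1 should give 1 at every pixel, and nn.Softmax2d() should produce the same values as nn.Softmax(dim=1) on this 4D input.

# Sum over the channel dimension: every entry of the 1x3x3 result should be ~1.0.
print(torch.sum(sf(bt), dim=1))

# nn.Softmax2d() also normalizes over the channel dimension of a 4D input,
# so its output should match nn.Softmax(dim=1) here (max abs difference ~0).
sf2d = nn.Softmax2d()
print((sf2d(bt) - sf(bt)).abs().max())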

print(torch.sum(sf(bt)))  # total is 1*3*3 = 9: each of the 9 pixels sums to 1 over its 2 channels
>>>
Variable containing:
 9
[torch.cuda.FloatTensor of size 1 (GPU 0)]
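
To illustrate the effect of choosing a different dim (a sketch under the same setup): with dim=3 the softmax is taken along the width, so each row of 3 values sums to 1 and the total over the 1x2x3x3 tensor is 1*2*3 = 6 instead of 9.

sf_w = nn.Softmax(dim=3)            # normalize along the last (width) dimension
print(torch.sum(sf_w(bt), dim=3))   # every entry ~1.0
print(torch.sum(sf_w(bt)))          # ~6: one per (batch, channel, row) slice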