
[TVM Tutorial] How to Use TVM Pass Instrument

Published 2025-06-16 17:26

Apache TVM is an end-to-end deep learning compiler framework for CPUs, GPUs, and various machine learning accelerators. More TVM documentation in Chinese is available at https://tvm.hyper.ai/

Author: Chi-Wei Wang

As more and more passes are implemented, it becomes increasingly important to instrument pass execution, analyze the effect of each pass, and observe various events during compilation.

Passes can be instrumented by providing a list of tvm.ir.instrument.PassInstrument instances to tvm.transform.PassContext. TVM ships a built-in instrument that collects timing information (tvm.ir.instrument.PassTimingInstrument), and customized instruments can be created through the extension mechanism offered by the tvm.instrument.pass_instrument() decorator.

This tutorial demonstrates how developers can use PassContext to instrument passes. See also Pass Infrastructure.

import tvm
import tvm.relay as relay
from tvm.relay.testing import resnet
from tvm.contrib.download import download_testdata
from tvm.relay.build_module import bind_params_by_name
from tvm.ir.instrument import (
    PassTimingInstrument,
    pass_instrument,
)

Create a Relay Program Example

We use the predefined ResNet-18 network in Relay.

batch_size = 1
num_of_image_class = 1000
image_shape = (3, 224, 224)
output_shape = (batch_size, num_of_image_class)
relay_mod, relay_params = resnet.get_workload(num_layers=18, batch_size=1, image_shape=image_shape)
print("Printing the IR module...")
print(relay_mod.astext(show_meta_data=False))

Output:

Printing the IR module...
#[version = "0.0.5"]
def @main(%data: Tensor[(1, 3, 224, 224), float32] /* ty=Tensor[(1, 3, 224, 224), float32] */, %bn_data_gamma: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %bn_data_beta: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %bn_data_moving_mean: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %bn_data_moving_var: Tensor[(3), float32] /* ty=Tensor[(3), float32] */, %conv0_weight: Tensor[(64, 3, 7, 7), float32] /* ty=Tensor[(64, 3, 7, 7), float32] */, %bn0_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %bn0_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %bn0_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %bn0_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn1_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_conv1_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage1_unit1_bn2_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn2_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn2_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_bn2_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit1_conv2_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage1_unit1_sc_weight: Tensor[(64, 64, 1, 1), float32] /* ty=Tensor[(64, 64, 1, 1), float32] */, %stage1_unit2_bn1_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn1_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn1_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn1_moving_var: Tensor[(64), float32] /* 
ty=Tensor[(64), float32] */, %stage1_unit2_conv1_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage1_unit2_bn2_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn2_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn2_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_bn2_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage1_unit2_conv2_weight: Tensor[(64, 64, 3, 3), float32] /* ty=Tensor[(64, 64, 3, 3), float32] */, %stage2_unit1_bn1_gamma: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_bn1_beta: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_bn1_moving_mean: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_bn1_moving_var: Tensor[(64), float32] /* ty=Tensor[(64), float32] */, %stage2_unit1_conv1_weight: Tensor[(128, 64, 3, 3), float32] /* ty=Tensor[(128, 64, 3, 3), float32] */, %stage2_unit1_bn2_gamma: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_bn2_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_bn2_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_bn2_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit1_conv2_weight: Tensor[(128, 128, 3, 3), float32] /* ty=Tensor[(128, 128, 3, 3), float32] */, %stage2_unit1_sc_weight: Tensor[(128, 64, 1, 1), float32] /* ty=Tensor[(128, 64, 1, 1), float32] */, %stage2_unit2_bn1_gamma: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn1_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn1_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn1_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_conv1_weight: Tensor[(128, 128, 3, 3), float32] /* ty=Tensor[(128, 128, 3, 3), float32] */, %stage2_unit2_bn2_gamma: 
Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn2_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn2_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_bn2_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage2_unit2_conv2_weight: Tensor[(128, 128, 3, 3), float32] /* ty=Tensor[(128, 128, 3, 3), float32] */, %stage3_unit1_bn1_gamma: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_bn1_beta: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_bn1_moving_mean: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_bn1_moving_var: Tensor[(128), float32] /* ty=Tensor[(128), float32] */, %stage3_unit1_conv1_weight: Tensor[(256, 128, 3, 3), float32] /* ty=Tensor[(256, 128, 3, 3), float32] */, %stage3_unit1_bn2_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_bn2_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_bn2_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_bn2_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit1_conv2_weight: Tensor[(256, 256, 3, 3), float32] /* ty=Tensor[(256, 256, 3, 3), float32] */, %stage3_unit1_sc_weight: Tensor[(256, 128, 1, 1), float32] /* ty=Tensor[(256, 128, 1, 1), float32] */, %stage3_unit2_bn1_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn1_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn1_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn1_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_conv1_weight: Tensor[(256, 256, 3, 3), float32] /* ty=Tensor[(256, 256, 3, 3), float32] */, %stage3_unit2_bn2_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn2_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, 
%stage3_unit2_bn2_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_bn2_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage3_unit2_conv2_weight: Tensor[(256, 256, 3, 3), float32] /* ty=Tensor[(256, 256, 3, 3), float32] */, %stage4_unit1_bn1_gamma: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_bn1_beta: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_bn1_moving_mean: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_bn1_moving_var: Tensor[(256), float32] /* ty=Tensor[(256), float32] */, %stage4_unit1_conv1_weight: Tensor[(512, 256, 3, 3), float32] /* ty=Tensor[(512, 256, 3, 3), float32] */, %stage4_unit1_bn2_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_bn2_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_bn2_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_bn2_moving_var: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit1_conv2_weight: Tensor[(512, 512, 3, 3), float32] /* ty=Tensor[(512, 512, 3, 3), float32] */, %stage4_unit1_sc_weight: Tensor[(512, 256, 1, 1), float32] /* ty=Tensor[(512, 256, 1, 1), float32] */, %stage4_unit2_bn1_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn1_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn1_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn1_moving_var: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_conv1_weight: Tensor[(512, 512, 3, 3), float32] /* ty=Tensor[(512, 512, 3, 3), float32] */, %stage4_unit2_bn2_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn2_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn2_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_bn2_moving_var: Tensor[(512), 
float32] /* ty=Tensor[(512), float32] */, %stage4_unit2_conv2_weight: Tensor[(512, 512, 3, 3), float32] /* ty=Tensor[(512, 512, 3, 3), float32] */, %bn1_gamma: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %bn1_beta: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %bn1_moving_mean: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %bn1_moving_var: Tensor[(512), float32] /* ty=Tensor[(512), float32] */, %fc1_weight: Tensor[(1000, 512), float32] /* ty=Tensor[(1000, 512), float32] */, %fc1_bias: Tensor[(1000), float32] /* ty=Tensor[(1000), float32] */) -> Tensor[(1, 1000), float32] {
  %0 = nn.batch_norm(%data, %bn_data_gamma, %bn_data_beta, %bn_data_moving_mean, %bn_data_moving_var, epsilon=2e-05f, scale=False) /* ty=(Tensor[(1, 3, 224, 224), float32], Tensor[(3), float32], Tensor[(3), float32]) */;
  %1 = %0.0 /* ty=Tensor[(1, 3, 224, 224), float32] */;
  %2 = nn.conv2d(%1, %conv0_weight, strides=[2, 2], padding=[3, 3, 3, 3], channels=64, kernel_size=[7, 7]) /* ty=Tensor[(1, 64, 112, 112), float32] */;
  %3 = nn.batch_norm(%2, %bn0_gamma, %bn0_beta, %bn0_moving_mean, %bn0_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 112, 112), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %4 = %3.0 /* ty=Tensor[(1, 64, 112, 112), float32] */;
  %5 = nn.relu(%4) /* ty=Tensor[(1, 64, 112, 112), float32] */;
  %6 = nn.max_pool2d(%5, pool_size=[3, 3], strides=[2, 2], padding=[1, 1, 1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %7 = nn.batch_norm(%6, %stage1_unit1_bn1_gamma, %stage1_unit1_bn1_beta, %stage1_unit1_bn1_moving_mean, %stage1_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %8 = %7.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %9 = nn.relu(%8) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %10 = nn.conv2d(%9, %stage1_unit1_conv1_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %11 = nn.batch_norm(%10, %stage1_unit1_bn2_gamma, %stage1_unit1_bn2_beta, %stage1_unit1_bn2_moving_mean, %stage1_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %12 = %11.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %13 = nn.relu(%12) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %14 = nn.conv2d(%13, %stage1_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %15 = nn.conv2d(%9, %stage1_unit1_sc_weight, padding=[0, 0, 0, 0], channels=64, kernel_size=[1, 1]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %16 = add(%14, %15) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %17 = nn.batch_norm(%16, %stage1_unit2_bn1_gamma, %stage1_unit2_bn1_beta, %stage1_unit2_bn1_moving_mean, %stage1_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %18 = %17.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %19 = nn.relu(%18) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %20 = nn.conv2d(%19, %stage1_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %21 = nn.batch_norm(%20, %stage1_unit2_bn2_gamma, %stage1_unit2_bn2_beta, %stage1_unit2_bn2_moving_mean, %stage1_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %22 = %21.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %23 = nn.relu(%22) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %24 = nn.conv2d(%23, %stage1_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=64, kernel_size=[3, 3]) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %25 = add(%24, %16) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %26 = nn.batch_norm(%25, %stage2_unit1_bn1_gamma, %stage2_unit1_bn1_beta, %stage2_unit1_bn1_moving_mean, %stage2_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 64, 56, 56), float32], Tensor[(64), float32], Tensor[(64), float32]) */;
  %27 = %26.0 /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %28 = nn.relu(%27) /* ty=Tensor[(1, 64, 56, 56), float32] */;
  %29 = nn.conv2d(%28, %stage2_unit1_conv1_weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %30 = nn.batch_norm(%29, %stage2_unit1_bn2_gamma, %stage2_unit1_bn2_beta, %stage2_unit1_bn2_moving_mean, %stage2_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %31 = %30.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %32 = nn.relu(%31) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %33 = nn.conv2d(%32, %stage2_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %34 = nn.conv2d(%28, %stage2_unit1_sc_weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=128, kernel_size=[1, 1]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %35 = add(%33, %34) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %36 = nn.batch_norm(%35, %stage2_unit2_bn1_gamma, %stage2_unit2_bn1_beta, %stage2_unit2_bn1_moving_mean, %stage2_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %37 = %36.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %38 = nn.relu(%37) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %39 = nn.conv2d(%38, %stage2_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %40 = nn.batch_norm(%39, %stage2_unit2_bn2_gamma, %stage2_unit2_bn2_beta, %stage2_unit2_bn2_moving_mean, %stage2_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %41 = %40.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %42 = nn.relu(%41) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %43 = nn.conv2d(%42, %stage2_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=128, kernel_size=[3, 3]) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %44 = add(%43, %35) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %45 = nn.batch_norm(%44, %stage3_unit1_bn1_gamma, %stage3_unit1_bn1_beta, %stage3_unit1_bn1_moving_mean, %stage3_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 128, 28, 28), float32], Tensor[(128), float32], Tensor[(128), float32]) */;
  %46 = %45.0 /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %47 = nn.relu(%46) /* ty=Tensor[(1, 128, 28, 28), float32] */;
  %48 = nn.conv2d(%47, %stage3_unit1_conv1_weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %49 = nn.batch_norm(%48, %stage3_unit1_bn2_gamma, %stage3_unit1_bn2_beta, %stage3_unit1_bn2_moving_mean, %stage3_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %50 = %49.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %51 = nn.relu(%50) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %52 = nn.conv2d(%51, %stage3_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %53 = nn.conv2d(%47, %stage3_unit1_sc_weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=256, kernel_size=[1, 1]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %54 = add(%52, %53) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %55 = nn.batch_norm(%54, %stage3_unit2_bn1_gamma, %stage3_unit2_bn1_beta, %stage3_unit2_bn1_moving_mean, %stage3_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %56 = %55.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %57 = nn.relu(%56) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %58 = nn.conv2d(%57, %stage3_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %59 = nn.batch_norm(%58, %stage3_unit2_bn2_gamma, %stage3_unit2_bn2_beta, %stage3_unit2_bn2_moving_mean, %stage3_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %60 = %59.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %61 = nn.relu(%60) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %62 = nn.conv2d(%61, %stage3_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=256, kernel_size=[3, 3]) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %63 = add(%62, %54) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %64 = nn.batch_norm(%63, %stage4_unit1_bn1_gamma, %stage4_unit1_bn1_beta, %stage4_unit1_bn1_moving_mean, %stage4_unit1_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 256, 14, 14), float32], Tensor[(256), float32], Tensor[(256), float32]) */;
  %65 = %64.0 /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %66 = nn.relu(%65) /* ty=Tensor[(1, 256, 14, 14), float32] */;
  %67 = nn.conv2d(%66, %stage4_unit1_conv1_weight, strides=[2, 2], padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %68 = nn.batch_norm(%67, %stage4_unit1_bn2_gamma, %stage4_unit1_bn2_beta, %stage4_unit1_bn2_moving_mean, %stage4_unit1_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %69 = %68.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %70 = nn.relu(%69) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %71 = nn.conv2d(%70, %stage4_unit1_conv2_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %72 = nn.conv2d(%66, %stage4_unit1_sc_weight, strides=[2, 2], padding=[0, 0, 0, 0], channels=512, kernel_size=[1, 1]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %73 = add(%71, %72) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %74 = nn.batch_norm(%73, %stage4_unit2_bn1_gamma, %stage4_unit2_bn1_beta, %stage4_unit2_bn1_moving_mean, %stage4_unit2_bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %75 = %74.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %76 = nn.relu(%75) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %77 = nn.conv2d(%76, %stage4_unit2_conv1_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %78 = nn.batch_norm(%77, %stage4_unit2_bn2_gamma, %stage4_unit2_bn2_beta, %stage4_unit2_bn2_moving_mean, %stage4_unit2_bn2_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %79 = %78.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %80 = nn.relu(%79) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %81 = nn.conv2d(%80, %stage4_unit2_conv2_weight, padding=[1, 1, 1, 1], channels=512, kernel_size=[3, 3]) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %82 = add(%81, %73) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %83 = nn.batch_norm(%82, %bn1_gamma, %bn1_beta, %bn1_moving_mean, %bn1_moving_var, epsilon=2e-05f) /* ty=(Tensor[(1, 512, 7, 7), float32], Tensor[(512), float32], Tensor[(512), float32]) */;
  %84 = %83.0 /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %85 = nn.relu(%84) /* ty=Tensor[(1, 512, 7, 7), float32] */;
  %86 = nn.global_avg_pool2d(%85) /* ty=Tensor[(1, 512, 1, 1), float32] */;
  %87 = nn.batch_flatten(%86) /* ty=Tensor[(1, 512), float32] */;
  %88 = nn.dense(%87, %fc1_weight, units=1000) /* ty=Tensor[(1, 1000), float32] */;
  %89 = nn.bias_add(%88, %fc1_bias, axis=-1) /* ty=Tensor[(1, 1000), float32] */;
  nn.softmax(%89) /* ty=Tensor[(1, 1000), float32] */
}

Create a PassContext with Instruments

To run all passes with an instrument, pass it to the constructor of PassContext via the instruments argument. PassTimingInstrument is a built-in instrument that profiles the execution time of each pass.

timing_inst = PassTimingInstrument()
with tvm.transform.PassContext(instruments=[timing_inst]):
    relay_mod = relay.transform.InferType()(relay_mod)
    relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
    # Before exiting the context, get the profiling results.
    profiles = timing_inst.render()
print("Printing results of timing profile...")
print(profiles)

Output:

Printing results of timing profile...
InferType: 6628us [6628us] (46.29%; 46.29%)
FoldScaleAxis: 7691us [6us] (53.71%; 53.71%)
        FoldConstant: 7685us [1578us] (53.67%; 99.92%)
                InferType: 6107us [6107us] (42.65%; 79.47%)
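The string returned by render() uses indentation to show pass nesting: the first number on each line is the total time of the pass (including nested passes), and the bracketed number is the pass's own exclusive time. As a small sketch, here is one way to parse that text into (depth, name, total_us, exclusive_us) tuples; the line format is inferred from the sample output above, not from a documented API, so treat the regular expression as an assumption:

```python
import re

# Sample text copied from the render() output above.
profile_text = """\
InferType: 6628us [6628us] (46.29%; 46.29%)
FoldScaleAxis: 7691us [6us] (53.71%; 53.71%)
        FoldConstant: 7685us [1578us] (53.67%; 99.92%)
                InferType: 6107us [6107us] (42.65%; 79.47%)
"""

def parse_profile(text, indent=8):
    """Parse render() text into (depth, pass_name, total_us, exclusive_us)."""
    rows = []
    for line in text.splitlines():
        m = re.match(r"( *)(\S+): (\d+)us \[(\d+)us\]", line)
        if m:
            depth = len(m.group(1)) // indent  # 8 spaces per nesting level (assumed)
            rows.append((depth, m.group(2), int(m.group(3)), int(m.group(4))))
    return rows

for row in parse_profile(profile_text):
    print(row)
```

This makes it easy to, for example, sort passes by exclusive time when the profile grows large.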

Use the Current PassContext with Instruments

It is also possible to use the current PassContext and register PassInstrument instances via the override_instruments method. Note that if any instruments already exist, override_instruments executes their exit_pass_ctx method first. It then switches to the new instruments and calls their enter_pass_ctx method. See the following sections and tvm.instrument.pass_instrument() for details on these methods.

cur_pass_ctx = tvm.transform.PassContext.current()
cur_pass_ctx.override_instruments([timing_inst])
relay_mod = relay.transform.InferType()(relay_mod)
relay_mod = relay.transform.FoldScaleAxis()(relay_mod)
profiles = timing_inst.render()
print("Printing results of timing profile...")
print(profiles)

Output:

Printing results of timing profile...
InferType: 6131us [6131us] (44.86%; 44.86%)
FoldScaleAxis: 7536us [4us] (55.14%; 55.14%)
        FoldConstant: 7532us [1549us] (55.11%; 99.94%)
                InferType: 5982us [5982us] (43.77%; 79.43%)

Register an empty list to clear the existing instruments.

Note that exit_pass_ctx of PassTimingInstrument is called here. The profiles are cleared, so nothing will be printed.

cur_pass_ctx.override_instruments([])
# Uncomment .render() to see a warning like:
# Warning: no passes have been profiled, did you enable pass profiling?
# profiles = timing_inst.render()

Create a Customized Instrument Class

A customized instrument class can be created using the tvm.instrument.pass_instrument() decorator.

Let's create an instrument class that calculates the change in the number of occurrences of each operator caused by each pass. We can look up op.name to find the name of each operator, and count the difference between the state before and after a pass.

@pass_instrument
class RelayCallNodeDiffer:
    def __init__(self):
        self._op_diff = []
        # Passes can be nested.
        # Use a stack to make sure we pop the correct before/after pairs.
        self._op_cnt_before_stack = []

    def enter_pass_ctx(self):
        self._op_diff = []
        self._op_cnt_before_stack = []

    def exit_pass_ctx(self):
        assert len(self._op_cnt_before_stack) == 0, "The stack is not empty. Something is wrong."

    def run_before_pass(self, mod, info):
        self._op_cnt_before_stack.append((info.name, self._count_nodes(mod)))

    def run_after_pass(self, mod, info):
        # Pop the latest recorded pass.
        name_before, op_to_cnt_before = self._op_cnt_before_stack.pop()
        assert name_before == info.name, "name_before: {}, info.name: {} doesn't match".format(
            name_before, info.name
        )
        cur_depth = len(self._op_cnt_before_stack)
        op_to_cnt_after = self._count_nodes(mod)
        op_diff = self._diff(op_to_cnt_after, op_to_cnt_before)
        # Only record passes that caused a difference.
        if op_diff:
            self._op_diff.append((cur_depth, info.name, op_diff))

    def get_pass_to_op_diff(self):
        """
        return [
          (depth, pass_name, {op_name: diff_num, ...}), ...
        ]
        """
        return self._op_diff

    @staticmethod
    def _count_nodes(mod):
        """Count the number of occurrences of each operator in the module"""
        ret = {}

        def visit(node):
            if isinstance(node, relay.expr.Call):
                if hasattr(node.op, "name"):
                    op_name = node.op.name
                else:
                    # Some CallNodes, e.g. relay.Function, do not have a "name".
                    return
                ret[op_name] = ret.get(op_name, 0) + 1

        relay.analysis.post_order_visit(mod["main"], visit)
        return ret

    @staticmethod
    def _diff(d_after, d_before):
        """Calculate the difference of two dictionary along their keys.
        The result is values in d_after minus values in d_before.
        """
        ret = {}
        key_after, key_before = set(d_after), set(d_before)
        for k in key_before & key_after:
            tmp = d_after[k] - d_before[k]
            if tmp:
                ret[k] = d_after[k] - d_before[k]
        for k in key_after - key_before:
            ret[k] = d_after[k]
        for k in key_before - key_after:
            ret[k] = -d_before[k]
        return ret
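The _diff helper is plain dictionary arithmetic and can be exercised without TVM. Here is a standalone restatement of its logic, applied to hypothetical operator counts (the counts below are made up for illustration):

```python
def diff_counts(d_after, d_before):
    """Same logic as RelayCallNodeDiffer._diff, restated standalone:
    values in d_after minus values in d_before, dropping zero entries."""
    ret = {}
    key_after, key_before = set(d_after), set(d_before)
    for k in key_before & key_after:
        delta = d_after[k] - d_before[k]
        if delta:
            ret[k] = delta
    for k in key_after - key_before:
        ret[k] = d_after[k]  # operator newly introduced by the pass
    for k in key_before - key_after:
        ret[k] = -d_before[k]  # operator completely removed by the pass
    return ret

# Hypothetical counts before/after a pass:
before = {"nn.conv2d": 20, "nn.bias_add": 1, "add": 8}
after = {"nn.conv2d": 20, "add": 9, "layout_transform": 2}
print(sorted(diff_counts(after, before).items()))
# [('add', 1), ('layout_transform', 2), ('nn.bias_add', -1)]
```

Note how unchanged operators (nn.conv2d) are dropped, while removed ones show up with a negative count.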

Apply Passes and Multiple Instrument Classes

Multiple instrument classes can be used in a single PassContext. However, note that instrument methods are executed in the order given by the instruments argument, so for instruments like PassTimingInstrument, it is inevitable that the execution time of other instrument classes is counted into the final profiling result.

call_node_inst = RelayCallNodeDiffer()
desired_layouts = {
    "nn.conv2d": ["NHWC", "HWIO"],
}
pass_seq = tvm.transform.Sequential(
    [
        relay.transform.FoldConstant(),
        relay.transform.ConvertLayout(desired_layouts),
        relay.transform.FoldConstant(),
    ]
)
relay_mod["main"] = bind_params_by_name(relay_mod["main"], relay_params)
# timing_inst is placed after call_node_inst.
# So the execution time of ``call_node_inst.run_after_pass()`` is also counted.
with tvm.transform.PassContext(opt_level=3, instruments=[call_node_inst, timing_inst]):
    relay_mod = pass_seq(relay_mod)
    profiles = timing_inst.render()
# Uncomment the next line to see the timing profile results.
# print(profiles)

Output:

/workspace/python/tvm/driver/build_module.py:268: UserWarning: target_host parameter is going to be deprecated. Please pass in tvm.target.Target(target, host=target_host) instead.
  "target_host parameter is going to be deprecated. "

We can see how many CallNodes of each operator type were added or removed.

from pprint import pprint

print("Printing the change in number of occurrences of each operator caused by each pass...")
pprint(call_node_inst.get_pass_to_op_diff())

Output:

Printing the change in number of occurrences of each operator caused by each pass...
[(1, 'CanonicalizeOps', {'add': 1, 'nn.bias_add': -1}),
 (1, 'ConvertLayout', {'expand_dims': 1, 'layout_transform': 23}),
 (1, 'FoldConstant', {'expand_dims': -1, 'layout_transform': -21}),
 (0, 'sequential', {'add': 1, 'layout_transform': 2, 'nn.bias_add': -1})]
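The depth-0 'sequential' entry aggregates the diffs of the nested depth-1 passes. As a pure-Python sanity check using the diffs printed above:

```python
from collections import Counter

# Depth-1 diffs copied from the output above:
per_pass_diffs = [
    {"add": 1, "nn.bias_add": -1},                 # CanonicalizeOps
    {"expand_dims": 1, "layout_transform": 23},    # ConvertLayout
    {"expand_dims": -1, "layout_transform": -21},  # FoldConstant
]

total = Counter()
for diff in per_pass_diffs:
    total.update(diff)

# Drop zero entries, as RelayCallNodeDiffer._diff does:
aggregate = {k: v for k, v in total.items() if v}
print(sorted(aggregate.items()))
# [('add', 1), ('layout_transform', 2), ('nn.bias_add', -1)]
```

The expand_dims added by ConvertLayout is folded away again, so it cancels out of the sequential total, matching the depth-0 entry in the output.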

Exception Handling

Let's see in detail what happens when an exception occurs in a method of a PassInstrument.

Define PassInstrument classes that raise an exception when entering/exiting PassContext:

class PassExampleBase:
    def __init__(self, name):
        self._name = name

    def enter_pass_ctx(self):
        print(self._name, "enter_pass_ctx")

    def exit_pass_ctx(self):
        print(self._name, "exit_pass_ctx")

    def should_run(self, mod, info):
        print(self._name, "should_run")
        return True

    def run_before_pass(self, mod, pass_info):
        print(self._name, "run_before_pass")

    def run_after_pass(self, mod, pass_info):
        print(self._name, "run_after_pass")

@pass_instrument
class PassFine(PassExampleBase):
    pass

@pass_instrument
class PassBadEnterCtx(PassExampleBase):
    def enter_pass_ctx(self):
        print(self._name, "bad enter_pass_ctx!!!")
        raise ValueError("{} bad enter_pass_ctx".format(self._name))

@pass_instrument
class PassBadExitCtx(PassExampleBase):
    def exit_pass_ctx(self):
        print(self._name, "bad exit_pass_ctx!!!")
        raise ValueError("{} bad exit_pass_ctx".format(self._name))

If an exception occurs in enter_pass_ctx, PassContext disables pass instrumentation. It then runs exit_pass_ctx for each PassInstrument that completed enter_pass_ctx successfully.

In the following example, we can see that exit_pass_ctx of PassFine_0 is executed after the exception.

demo_ctx = tvm.transform.PassContext(
    instruments=[
        PassFine("PassFine_0"),
        PassBadEnterCtx("PassBadEnterCtx"),
        PassFine("PassFine_1"),
    ]
)
try:
    with demo_ctx:
        relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 enter_pass_ctx
PassBadEnterCtx bad enter_pass_ctx!!!
PassFine_0 exit_pass_ctx
Catching ValueError: PassBadEnterCtx bad enter_pass_ctx

The exception inside a PassInstrument instance causes all instruments of the current PassContext to be cleared, so nothing is printed when override_instruments is called.

demo_ctx.override_instruments([])  # no "PassFine_0 exit_pass_ctx", etc., is printed

If an exception occurs in exit_pass_ctx, pass instrumentation is disabled and the exception is propagated. This means PassInstrument instances registered after the one that raised the exception do not have their exit_pass_ctx executed.

demo_ctx = tvm.transform.PassContext(
    instruments=[
        PassFine("PassFine_0"),
        PassBadExitCtx("PassBadExitCtx"),
        PassFine("PassFine_1"),
    ]
)
try:
    # PassFine_1 executes enter_pass_ctx, but not exit_pass_ctx.
    with demo_ctx:
        relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 enter_pass_ctx
PassBadExitCtx enter_pass_ctx
PassFine_1 enter_pass_ctx
PassFine_0 should_run
PassBadExitCtx should_run
PassFine_1 should_run
PassFine_0 run_before_pass
PassBadExitCtx run_before_pass
PassFine_1 run_before_pass
PassFine_0 run_after_pass
PassBadExitCtx run_after_pass
PassFine_1 run_after_pass
PassFine_0 exit_pass_ctx
PassBadExitCtx bad exit_pass_ctx!!!
Catching ValueError: PassBadExitCtx bad exit_pass_ctx

Exceptions occurring in should_run, run_before_pass, and run_after_pass are not handled explicitly; we rely on the context manager (the with syntax) to exit PassContext safely.

Take run_before_pass as an example:

@pass_instrument
class PassBadRunBefore(PassExampleBase):
    def run_before_pass(self, mod, pass_info):
        print(self._name, "bad run_before_pass!!!")
        raise ValueError("{} bad run_before_pass".format(self._name))

demo_ctx = tvm.transform.PassContext(
    instruments=[
        PassFine("PassFine_0"),
        PassBadRunBefore("PassBadRunBefore"),
        PassFine("PassFine_1"),
    ]
)
try:
    # All exit_pass_ctx methods are called.
    with demo_ctx:
        relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 enter_pass_ctx
PassBadRunBefore enter_pass_ctx
PassFine_1 enter_pass_ctx
PassFine_0 should_run
PassBadRunBefore should_run
PassFine_1 should_run
PassFine_0 run_before_pass
PassBadRunBefore bad run_before_pass!!!
PassFine_0 exit_pass_ctx
PassBadRunBefore exit_pass_ctx
PassFine_1 exit_pass_ctx
Catching ValueError: PassBadRunBefore bad run_before_pass
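This safe-exit behavior follows standard Python context-manager semantics: __exit__ runs even when the body raises. A minimal pure-Python analogy (the Ctx class below is illustrative only, not a TVM API):

```python
class Ctx:
    """Illustrative stand-in for PassContext's enter/exit behavior."""

    def __enter__(self):
        print("enter")  # analogous to enter_pass_ctx
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs even when the body raises, just like exit_pass_ctx above.
        print("exit")
        return False  # do not swallow the exception

try:
    with Ctx():
        raise ValueError("bad run_before_pass")
except ValueError as ex:
    print("Catching", ex)
```

This prints "enter", then "exit", then the caught exception, mirroring the PassFine_* traces above.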

Note that pass instrumentation is not disabled in this case. So if override_instruments is called, the exit_pass_ctx of the previously registered PassInstrument instances is called.

demo_ctx.override_instruments([])

Output:

PassFine_0 exit_pass_ctx
PassBadRunBefore exit_pass_ctx
PassFine_1 exit_pass_ctx

If we don't wrap pass execution with the with syntax, exit_pass_ctx is not called. Let's try this with the current PassContext:

cur_pass_ctx = tvm.transform.PassContext.current()
cur_pass_ctx.override_instruments(
    [
        PassFine("PassFine_0"),
        PassBadRunBefore("PassBadRunBefore"),
        PassFine("PassFine_1"),
    ]
)

Output:

PassFine_0 enter_pass_ctx
PassBadRunBefore enter_pass_ctx
PassFine_1 enter_pass_ctx

Then call the passes. After the exception, `exit_pass_ctx` is not executed.

try:
    # No ``exit_pass_ctx`` got executed.
    relay_mod = relay.transform.InferType()(relay_mod)
except ValueError as ex:
    print("Catching", str(ex).split("\n")[-1])

Output:

PassFine_0 should_run
PassBadRunBefore should_run
PassFine_1 should_run
PassFine_0 run_before_pass
PassBadRunBefore bad run_before_pass!!!
Catching ValueError: PassBadRunBefore bad run_before_pass

Clear the instruments:

cur_pass_ctx.override_instruments([])

Output:

PassFine_0 exit_pass_ctx
PassBadRunBefore exit_pass_ctx
PassFine_1 exit_pass_ctx
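To summarize the difference: without the `with` syntax nothing invokes the exit callbacks automatically, so a manual driver has to arrange its own cleanup, typically via `try`/`finally`. A pure-Python sketch of that pattern (the `run_passes` function is hypothetical, not a TVM API):

```python
def run_passes(instruments, events, fail=False):
    # Hypothetical manual driver (not a TVM API): without the `with`
    # syntax nothing calls the exit callbacks automatically, so cleanup
    # must be arranged explicitly, e.g. with try/finally.
    for name in instruments:
        events.append(f"{name} enter_pass_ctx")
    try:
        for name in instruments:
            events.append(f"{name} run_before_pass")
            if fail and name.startswith("PassBad"):
                raise ValueError(f"{name} bad run_before_pass")
    finally:
        # Mirrors override_instruments([]): flush exit_pass_ctx for
        # every registered instrument, even while an exception is live.
        for name in instruments:
            events.append(f"{name} exit_pass_ctx")


events = []
try:
    run_passes(["PassFine_0", "PassBadRunBefore"], events, fail=True)
except ValueError as ex:
    print("Catching ValueError:", ex)

print(events)
```

As in the tutorial's traces, the bad instrument stops further `run_before_pass` calls, yet every `exit_pass_ctx` still fires before the exception is reported.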

Download the Python source code: use_pass_instrument.py

Download the Jupyter notebook: use_pass_instrument.ipynb
