A Detailed Tutorial: Quickly Deploying Hive with Docker Compose

Deploying services with docker-compose keeps the resource and time cost to a minimum, which makes it handy for learning, testing, and verifying functionality.

1. Overview

Deploying Hive with docker-compose builds directly on the Hadoop deployment from the previous article. Since Hive is the most widely used data-warehouse service, it is well worth integrating; if that interests you, read on carefully.

2. Preparation

1) Install Docker

# Install the yum-config-manager tool
yum -y install yum-utils

# Use the Aliyun yum repo (recommended):
#yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install docker-ce
yum install -y docker-ce
# Start Docker and enable it at boot
systemctl enable --now docker
docker --version

2) Install docker-compose

curl -SL https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose

chmod +x /usr/local/bin/docker-compose
docker-compose --version

3. Create the Network

# Create the network. Note: the name must be hadoop-network, matching what the compose files reference; a mismatched name causes problems when the hs2 service starts!
docker network create hadoop-network

# Verify
docker network ls
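
To confirm the network's settings, and later to see which containers have joined it, docker network inspect also works:

# Show the network's driver, subnet, and attached containers
docker network inspect hadoop-network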

4. MySQL Deployment

1) Pull the MySQL image

docker pull mysql:5.7
docker tag mysql:5.7 registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/mysql:5.7

2) Configuration

mkdir -p conf/ data/db/

cat >conf/my.cnf<<EOF
[mysqld]
character-set-server=utf8
log-bin=mysql-bin
server-id=1
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
symbolic-links=0
secure_file_priv =
wait_timeout=120
interactive_timeout=120
default-time_zone = '+8:00'
skip-external-locking
skip-name-resolve
open_files_limit = 10240
max_connections = 1000
max_connect_errors = 6000
table_open_cache = 800
max_allowed_packet = 40m
sort_buffer_size = 2M
join_buffer_size = 1M
thread_cache_size = 32
query_cache_size = 64M
transaction_isolation = READ-COMMITTED
tmp_table_size = 128M
max_heap_table_size = 128M
sync-binlog = 1
binlog_format = ROW
binlog_cache_size = 1M
key_buffer_size = 128M
read_buffer_size = 2M
read_rnd_buffer_size = 4M
bulk_insert_buffer_size = 64M
lower_case_table_names = 1
explicit_defaults_for_timestamp=true
event_scheduler = ON
log_bin_trust_function_creators = 1
innodb_buffer_pool_size = 512M
innodb_flush_log_at_trx_commit = 1
innodb_file_per_table = 1
innodb_log_buffer_size = 4M
innodb_log_file_size = 256M
innodb_max_dirty_pages_pct = 90
innodb_read_io_threads = 4
innodb_write_io_threads = 4
EOF

3) Compose file (mysql-compose.yaml)

version: '3'
services:
  db:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/mysql:5.7 # MySQL version
    container_name: mysql
    hostname: mysql
    volumes:
      - ./data/db:/var/lib/mysql
      - ./conf/my.cnf:/etc/mysql/mysql.conf.d/mysqld.cnf
    restart: always
    ports:
      - 13306:3306
    networks:
      - hadoop-network
    environment:
      MYSQL_ROOT_PASSWORD: 123456 # root password
      secure_file_priv:
    healthcheck:
      # curl cannot speak the MySQL protocol, so probe with mysqladmin instead
      test: ["CMD-SHELL", "mysqladmin ping -uroot -p123456 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 3

# Join the external network created earlier
networks:
  hadoop-network:
    external: true

4) Deploy MySQL

docker-compose -f mysql-compose.yaml up -d
docker-compose -f mysql-compose.yaml ps

# Log in to the container
docker exec -it mysql mysql -uroot -p123456
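
Since the compose file publishes container port 3306 on host port 13306, any MySQL client on the host can also reach the server; a quick sketch (assuming the mysql client is installed on the host):

# Connect through the published host port and confirm the server answers
mysql -h127.0.0.1 -P13306 -uroot -p123456 -e 'SELECT VERSION();'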


5. Hive Deployment

1) Download Hive

Download page: http://archive.apache.org/dist/hive

# Download
wget http://archive.apache.org/dist/hive/hive-3.1.3/apache-hive-3.1.3-bin.tar.gz

# Extract
tar -zxvf apache-hive-3.1.3-bin.tar.gz

2) Configuration

Create images/hive-config/hive-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

  <!-- HDFS warehouse directory -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive_remote/warehouse</value>
  </property>

  <property>
    <name>hive.metastore.local</name>
    <value>false</value>
  </property>

  <!-- JDBC URL of the backing MySQL database; hive_metastore is the database name,
       created automatically (createDatabaseIfNotExist=true), so any name works -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://mysql:3306/hive_metastore?createDatabaseIfNotExist=true&amp;useSSL=false&amp;serverTimezone=Asia/Shanghai</value>
  </property>

  <!-- MySQL driver -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <!--<value>com.mysql.cj.jdbc.Driver</value>-->
    <value>com.mysql.jdbc.Driver</value>
  </property>

  <!-- MySQL user -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>

  <!-- MySQL password -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>

  <!-- Whether to verify the metastore schema -->
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>

  <property>
    <name>system:user.name</name>
    <value>root</value>
    <description>user name</description>
  </property>

  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hive-metastore:9083</value>
  </property>

  <!-- Bind host -->
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>0.0.0.0</value>
    <description>Bind host on which to run the HiveServer2 Thrift service.</description>
  </property>

  <!-- HiveServer2 port; default is 10000 -->
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>

  <property>
    <name>hive.server2.active.passive.ha.enable</name>
    <value>true</value>
  </property>

</configuration>

3) Startup script (bootstrap.sh)

#!/usr/bin/env sh

# Block until host $1 is listening on port $2
wait_for() {
    echo "Waiting for $1 to listen on $2..."
    while ! nc -z $1 $2; do echo waiting...; sleep 1; done
}

start_hdfs_namenode() {
    # Format the NameNode only on the first start
    if [ ! -f /tmp/namenode-formated ]; then
        ${HADOOP_HOME}/bin/hdfs namenode -format >/tmp/namenode-formated
    fi

    ${HADOOP_HOME}/bin/hdfs --loglevel INFO --daemon start namenode

    # Keep the container in the foreground by tailing the logs
    tail -f ${HADOOP_HOME}/logs/*namenode*.log
}

start_hdfs_datanode() {
    wait_for $1 $2

    ${HADOOP_HOME}/bin/hdfs --loglevel INFO --daemon start datanode

    tail -f ${HADOOP_HOME}/logs/*datanode*.log
}

start_yarn_resourcemanager() {
    ${HADOOP_HOME}/bin/yarn --loglevel INFO --daemon start resourcemanager

    tail -f ${HADOOP_HOME}/logs/*resourcemanager*.log
}

start_yarn_nodemanager() {
    wait_for $1 $2

    ${HADOOP_HOME}/bin/yarn --loglevel INFO --daemon start nodemanager

    tail -f ${HADOOP_HOME}/logs/*nodemanager*.log
}

start_yarn_proxyserver() {
    wait_for $1 $2

    ${HADOOP_HOME}/bin/yarn --loglevel INFO --daemon start proxyserver

    tail -f ${HADOOP_HOME}/logs/*proxyserver*.log
}

start_mr_historyserver() {
    wait_for $1 $2

    ${HADOOP_HOME}/bin/mapred --loglevel INFO --daemon start historyserver

    tail -f ${HADOOP_HOME}/logs/*historyserver*.log
}

start_hive_metastore() {
    # Initialize the metastore schema in MySQL only on the first start
    if [ ! -f ${HIVE_HOME}/formated ]; then
        schematool -initSchema -dbType mysql --verbose > ${HIVE_HOME}/formated
    fi

    $HIVE_HOME/bin/hive --service metastore
}

start_hive_hiveserver2() {
    $HIVE_HOME/bin/hive --service hiveserver2
}

case $1 in
    hadoop-hdfs-nn)
        start_hdfs_namenode
        ;;
    hadoop-hdfs-dn)
        start_hdfs_datanode $2 $3
        ;;
    hadoop-yarn-rm)
        start_yarn_resourcemanager
        ;;
    hadoop-yarn-nm)
        start_yarn_nodemanager $2 $3
        ;;
    hadoop-yarn-proxyserver)
        start_yarn_proxyserver $2 $3
        ;;
    hadoop-mr-historyserver)
        start_mr_historyserver $2 $3
        ;;
    hive-metastore)
        start_hive_metastore $2 $3
        ;;
    hive-hiveserver2)
        start_hive_hiveserver2 $2 $3
        ;;
    *)
        echo "Please pass a valid service name to start~"
        ;;
esac
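
The compose services below invoke this script as the container command, passing the service name plus an optional host/port to wait for. For example, the metastore container effectively runs the following (a sketch; 9866 stands in for the DataNode port taken from .env):

# Wait for hadoop-hdfs-dn-2:9866, initialize the schema on first run, then start the metastore
/opt/apache/bootstrap.sh hive-metastore hadoop-hdfs-dn-2 9866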

4) Build the image (Dockerfile)

FROM registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop:v1

# Hive configuration (hive-site.xml)
COPY hive-config/* ${HIVE_HOME}/conf/

# Startup script
COPY bootstrap.sh /opt/apache/

# MySQL JDBC driver for the metastore
COPY mysql-connector-java-5.1.49/mysql-connector-java-5.1.49-bin.jar ${HIVE_HOME}/lib/

RUN sudo mkdir -p /home/hadoop/ && sudo chown -R hadoop:hadoop /home/hadoop/

#RUN yum -y install which
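
For the COPY instructions to resolve, the build context (the images/ directory, given the hive-site.xml path above) should look roughly like this; the mysql-connector directory is assumed to come from unpacking the driver tarball:

images/
├── Dockerfile
├── bootstrap.sh
├── hive-config/
│   └── hive-site.xml
└── mysql-connector-java-5.1.49/
    └── mysql-connector-java-5.1.49-bin.jar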

Now build the image:

# Build the image
docker build -t registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1 . --no-cache

# Push the image (optional)
docker push registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1

### Flag reference
# -t: image name (and tag)
# . : build context is the current directory (containing the Dockerfile)
# -f: path to the Dockerfile, if not the default
# --no-cache: build without using the cache

5) Compose file (docker-compose.yaml)

version: '3'
services:
  hadoop-hdfs-nn:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-hdfs-nn
    hostname: hadoop-hdfs-nn
    restart: always
    privileged: true
    env_file:
      - .env
    ports:
      - "30070:${HADOOP_HDFS_NN_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-hdfs-nn"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_HDFS_NN_PORT} || exit 1"]
      interval: 20s
      timeout: 20s
      retries: 3
  hadoop-hdfs-dn-0:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-hdfs-dn-0
    hostname: hadoop-hdfs-dn-0
    restart: always
    depends_on:
      - hadoop-hdfs-nn
    env_file:
      - .env
    ports:
      - "30864:${HADOOP_HDFS_DN_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-hdfs-dn hadoop-hdfs-nn ${HADOOP_HDFS_NN_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_HDFS_DN_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-hdfs-dn-1:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-hdfs-dn-1
    hostname: hadoop-hdfs-dn-1
    restart: always
    depends_on:
      - hadoop-hdfs-nn
    env_file:
      - .env
    ports:
      - "30865:${HADOOP_HDFS_DN_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-hdfs-dn hadoop-hdfs-nn ${HADOOP_HDFS_NN_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_HDFS_DN_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-hdfs-dn-2:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-hdfs-dn-2
    hostname: hadoop-hdfs-dn-2
    restart: always
    depends_on:
      - hadoop-hdfs-nn
    env_file:
      - .env
    ports:
      - "30866:${HADOOP_HDFS_DN_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-hdfs-dn hadoop-hdfs-nn ${HADOOP_HDFS_NN_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_HDFS_DN_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-yarn-rm:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-yarn-rm
    hostname: hadoop-yarn-rm
    restart: always
    env_file:
      - .env
    ports:
      - "30888:${HADOOP_YARN_RM_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-yarn-rm"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "netstat -tnlp|grep :${HADOOP_YARN_RM_PORT} || exit 1"]
      interval: 20s
      timeout: 20s
      retries: 3
  hadoop-yarn-nm-0:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-yarn-nm-0
    hostname: hadoop-yarn-nm-0
    restart: always
    depends_on:
      - hadoop-yarn-rm
    env_file:
      - .env
    ports:
      - "30042:${HADOOP_YARN_NM_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-yarn-nm hadoop-yarn-rm ${HADOOP_YARN_RM_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_YARN_NM_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-yarn-nm-1:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-yarn-nm-1
    hostname: hadoop-yarn-nm-1
    restart: always
    depends_on:
      - hadoop-yarn-rm
    env_file:
      - .env
    ports:
      - "30043:${HADOOP_YARN_NM_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-yarn-nm hadoop-yarn-rm ${HADOOP_YARN_RM_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_YARN_NM_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-yarn-nm-2:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-yarn-nm-2
    hostname: hadoop-yarn-nm-2
    restart: always
    depends_on:
      - hadoop-yarn-rm
    env_file:
      - .env
    ports:
      - "30044:${HADOOP_YARN_NM_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-yarn-nm hadoop-yarn-rm ${HADOOP_YARN_RM_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "curl --fail http://localhost:${HADOOP_YARN_NM_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-yarn-proxyserver:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-yarn-proxyserver
    hostname: hadoop-yarn-proxyserver
    restart: always
    depends_on:
      - hadoop-yarn-rm
    env_file:
      - .env
    ports:
      - "30911:${HADOOP_YARN_PROXYSERVER_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-yarn-proxyserver hadoop-yarn-rm ${HADOOP_YARN_RM_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "netstat -tnlp|grep :${HADOOP_YARN_PROXYSERVER_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hadoop-mr-historyserver:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hadoop-mr-historyserver
    hostname: hadoop-mr-historyserver
    restart: always
    depends_on:
      - hadoop-yarn-rm
    env_file:
      - .env
    ports:
      - "31988:${HADOOP_MR_HISTORYSERVER_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hadoop-mr-historyserver hadoop-yarn-rm ${HADOOP_YARN_RM_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "netstat -tnlp|grep :${HADOOP_MR_HISTORYSERVER_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 3
  hive-metastore:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hive-metastore
    hostname: hive-metastore
    restart: always
    depends_on:
      - hadoop-hdfs-dn-2
    env_file:
      - .env
    ports:
      - "30983:${HIVE_METASTORE_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hive-metastore hadoop-hdfs-dn-2 ${HADOOP_HDFS_DN_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "netstat -tnlp|grep :${HIVE_METASTORE_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 5
  hive-hiveserver2:
    image: registry.cn-hangzhou.aliyuncs.com/bigdata_cloudnative/hadoop_hive:v1
    user: "hadoop:hadoop"
    container_name: hive-hiveserver2
    hostname: hive-hiveserver2
    restart: always
    depends_on:
      - hive-metastore
    env_file:
      - .env
    ports:
      - "31000:${HIVE_HIVESERVER2_PORT}"
    command: ["sh","-c","/opt/apache/bootstrap.sh hive-hiveserver2 hive-metastore ${HIVE_METASTORE_PORT}"]
    networks:
      - hadoop-network
    healthcheck:
      test: ["CMD-SHELL", "netstat -tnlp|grep :${HIVE_HIVESERVER2_PORT} || exit 1"]
      interval: 30s
      timeout: 30s
      retries: 5

# Join the external network created earlier
networks:
  hadoop-network:
    external: true
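
Note the env_file entries: every service port comes from a .env file carried over from the Hadoop article. It is not reproduced there, but a plausible sketch using stock Hadoop/Hive defaults would be (the proxyserver port in particular is a guess; use whatever your Hadoop configuration actually sets):

# .env — assumed values, adjust to match your Hadoop deployment
HADOOP_HDFS_NN_PORT=9870
HADOOP_HDFS_DN_PORT=9864
HADOOP_YARN_RM_PORT=8088
HADOOP_YARN_NM_PORT=8042
HADOOP_YARN_PROXYSERVER_PORT=9111
HADOOP_MR_HISTORYSERVER_PORT=19888
HIVE_METASTORE_PORT=9083
HIVE_HIVESERVER2_PORT=10000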

6) Deploy

docker-compose -f docker-compose.yaml up -d

# Check status
docker-compose -f docker-compose.yaml ps


A quick smoke test:
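
For instance, open a Beeline session against HiveServer2 from inside its container and run a couple of statements (a sketch; the hadoop login matches the user the containers run as):

# Open a Beeline session against HiveServer2 (port 10000 inside the network)
docker exec -it hive-hiveserver2 beeline -u jdbc:hive2://hive-hiveserver2:10000 -n hadoop

-- then, inside Beeline:
create database if not exists test_db;
show databases;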


[Problem] If you hit an error like the one below, it is because the stack has been started more than once: the old HDFS data is still there, but the DataNodes now have different IPs (this does not happen on a bare-metal deployment, where host IPs are fixed). You need to refresh the nodes; you could also wipe the old data, but that is not recommended, so prefer refreshing. (This matters whenever the data persists, for example via host mounts; in my case there are no host mounts, and the old data survives because the previous containers still exist. Several solutions follow.)

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException): Datanode denied communication with namenode because the host is not in the include-list: DatanodeRegistration(172.30.0.12:9866, datanodeUuid=f8188476-4a88-4cd6-836f-769d510929e4, infoPort=9864, infoSecurePort=0, ipcPort=9867, storageInfo=lv=-57;cid=CID-f998d368-222c-4a9a-88a5-85497a82dcac;nsid=1840040096;c=1680661390829)


[Solutions]

1. Remove the old containers and restart

# Remove exited containers
docker rm `docker ps -a|grep 'Exited'|awk '{print $1}'`

# Restart the services
docker-compose -f docker-compose.yaml up -d

# Check status
docker-compose -f docker-compose.yaml ps

2. Refresh the DataNodes from the NameNode

docker exec -it hadoop-hdfs-nn hdfs dfsadmin -refreshNodes

3. Refresh the DataNodes from any other node

# Using hadoop-hdfs-dn-0 as an example
docker exec -it hadoop-hdfs-dn-0 hdfs dfsadmin -fs hdfs://hadoop-hdfs-nn:9000 -refreshNodes
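
Whichever option you pick, you can then confirm that all DataNodes have re-registered with an HDFS report (a sketch):

# Expect "Live datanodes (3):" once every DataNode is back
docker exec -it hadoop-hdfs-nn hdfs dfsadmin -report | grep 'Live datanodes'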
