This article explains how to deploy Helm and use it for basic tasks. The steps are straightforward, quick, and practical.
Helm packages software for release, supports version management and control of releases, and greatly simplifies deploying and managing applications on Kubernetes.

As businesses move to containers and microservice architectures, a large monolith is decomposed into multiple services. This reduces the monolith's complexity and lets each microservice be deployed and scaled independently, enabling agile development and rapid iteration. But everything has two sides: while microservices bring many conveniences, splitting an application into many components sharply increases the number of services. For Kubernetes orchestration, each component has its own resource files and can be deployed and scaled on its own, which raises several challenges:
- Managing, editing, and updating a large number of Kubernetes configuration files
- Deploying a complex application made up of many configuration files
- Sharing and reusing Kubernetes configurations and applications
- Parameterizing configuration templates for multiple environments
- Managing application releases: rollback, diff, and viewing release history
- Controlling individual stages within a deployment cycle
- Post-release verification
Helm solves exactly these problems.

Helm packages Kubernetes resources (such as Deployments, Services, and Ingresses) into a chart, and charts are stored in and shared through a chart repository. Helm makes releases configurable, supports versioning of release configuration, and simplifies version control, packaging, releasing, deleting, and upgrading applications on Kubernetes.

This article briefly covers Helm's purpose, architecture, installation, and usage.
As a package manager for Kubernetes, Helm provides the following features:

- Create new charts
- Package charts into .tgz archives
- Upload charts to, and download charts from, a chart repository
- Install and uninstall charts in a Kubernetes cluster
- Manage the release cycle of charts installed with Helm
Helm has three important concepts:

- chart: contains all the information needed to create an instance of a Kubernetes application
- config: contains the configuration information for an application release
- release: a running instance of a chart combined with its config
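To make the chart concept concrete, a chart is just a directory of files. The layout below is a sketch using Helm v2 conventions; the `hello-chart` name is illustrative, not from the original article:

```
hello-chart/
  Chart.yaml        # chart metadata: name, version, description
  values.yaml       # default config values; overridable per release
  templates/        # Kubernetes manifests written as Go templates
    deployment.yaml
    service.yaml
```

Installing this chart with a particular set of config values produces a release.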
Helm consists of two components:

The Helm client is the user's command-line tool. It is mainly responsible for:

- Local chart development
- Repository management
- Interacting with the Tiller server
- Sending charts to be installed
- Querying release information
- Requesting upgrades or uninstalls of existing releases
The Tiller server is deployed inside the Kubernetes cluster. It interacts with the Helm client and the Kubernetes API server, and is mainly responsible for:

- Listening for requests from the Helm client
- Building a release from a chart and its configuration
- Installing charts into the Kubernetes cluster and tracking subsequent releases
- Upgrading and uninstalling charts by interacting with Kubernetes

Put simply, the client manages charts, while the server manages releases.
Helm client

The Helm client is written in Go and talks to the Tiller server over gRPC.

Tiller server

The Tiller server is also written in Go. It exposes a gRPC server for the Helm client and uses the Kubernetes client library to communicate with Kubernetes; that library currently uses the REST+JSON format.

The Tiller server has no database of its own; it currently stores release information in Kubernetes ConfigMaps.

Note: write configuration files in YAML wherever possible.
If your situation differs from mine, read the official quick-start guide to understand the core installation flow and the various scenarios.

Prerequisites:

- The Helm release download page
- A Kubernetes cluster
- An understanding of the Kubernetes context security mechanism

Download the Helm package:

```shell
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
```
My environment uses RBAC (Role-Based Access Control) authorization, so a ServiceAccount and the corresponding rules must be configured before installing Helm. See the official Role-based Access Control documentation for reference.

RBAC manifest (rbac-config.yaml):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```
cluster-admin is a ClusterRole that Kubernetes creates by default, so it does not need to be defined.
Install Helm:

```shell
$ kubectl create -f rbac-config.yaml
serviceaccount "tiller" created
clusterrolebinding "tiller" created
$ helm init --service-account tiller
```
Output:

```
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```
This installation method is recommended for experimental environments, after which you can install system components such as ingress-nginx.

The goal: install Helm into the helm-system namespace and allow Tiller to deploy applications into the kube-public namespace.

Create the helm-system namespace:

```shell
kubectl create namespace helm-system
```
Define the ServiceAccount:

```yaml
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: tiller
  namespace: helm-system
```
Create a Role with full permissions in the kube-public namespace, and bind Tiller's ServiceAccount to it so Tiller can manage all resources in kube-public:

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-manager
  namespace: kube-public
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["*"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: kube-public
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: helm-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```
Helm stores release information in ConfigMaps in the namespace where Tiller is installed, here helm-system, so Tiller must be allowed to operate on ConfigMaps in helm-system. Create the Role tiller-manager in helm-system and bind it to the ServiceAccount helm-system/tiller:

```yaml
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: helm-system
  name: tiller-manager
rules:
  - apiGroups: ["", "extensions", "apps"]
    resources: ["configmaps"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tiller-binding
  namespace: helm-system
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: helm-system
roleRef:
  kind: Role
  name: tiller-manager
  apiGroup: rbac.authorization.k8s.io
```
Install Helm with `helm init --service-account tiller --tiller-namespace helm-system`.
Options for helm init:

- `--service-account`: the ServiceAccount for Tiller; applies to clusters with Kubernetes RBAC enabled
- `--tiller-namespace`: install Helm into the given namespace
- `--tiller-image`: use the given Tiller image
- `--kube-context`: install Tiller into a specific Kubernetes cluster
The first run fails:

```
[root@kuber24 helm]# helm init --service-account tiller --tiller-namespace helm-system
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: EOF
```
This happens because Google's domains are blocked. Edit the hosts file to map storage.googleapis.com to a reachable IP. An up-to-date hosts file for accessing Google from mainland China is maintained in the googlehosts/hosts project on GitHub, in the hosts/hosts-files/hosts file.
Running helm init again succeeds:

```
[root@kuber24 helm]# helm init --service-account tiller --tiller-namespace helm-system
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
```
Checking the Tiller Pod's status shows an ImagePullBackOff error:

```
[root@kuber24 resources]# kubectl get pods --all-namespaces | grep tiller
helm-system   tiller-deploy-cdcd5dcb5-fqm57   0/1   ImagePullBackOff   0   13m
```
Inspecting the Pod with `kubectl describe pod tiller-deploy-cdcd5dcb5-fqm57 -n helm-system` shows it depends on the image gcr.io/kubernetes-helm/tiller:v2.11.0.
Search Docker Hub to see whether someone has mirrored that image:

```
[root@kuber24 ~]# docker search tiller:v2.11.0
INDEX       NAME                           DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
docker.io   docker.io/jay1991115/tiller    gcr.io/kubernetes-helm/tiller:v2.11.0           1                  [OK]
docker.io   docker.io/luyx30/tiller        tiller:v2.11.0                                  1                  [OK]
docker.io   docker.io/1017746640/tiller    FROM gcr.io/kubernetes-helm/tiller:v2.11.0      0                  [OK]
docker.io   docker.io/724399396/tiller     gcr.io/kubernetes-helm/tiller:v2.11.0-rc.2...   0                  [OK]
docker.io   docker.io/fengzos/tiller       gcr.io/kubernetes-helm/tiller:v2.11.0           0                  [OK]
docker.io   docker.io/imwower/tiller       tiller from gcr.io/kubernetes-helm/tiller:...   0                  [OK]
docker.io   docker.io/xiaotech/tiller      FROM gcr.io/kubernetes-helm/tiller:v2.11.0      0                  [OK]
docker.io   docker.io/yumingc/tiller       tiller:v2.11.0                                  0                  [OK]
docker.io   docker.io/zhangc476/tiller     gcr.io/kubernetes-helm/tiller/kubernetes-h...   0                  [OK]
```
Alternatively, use the mirrorgooglecontainers mirrors of Google images on hub.docker.com and re-tag the image. Every node needs the image installed.

When an image cannot be pulled, either:
- use a copy someone has synced to Docker Hub, found with `docker search $NAME:$VERSION`, or
- work around the block via the hosts file.
Error message:

```
[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)
```

Running `helm repo update` does not solve the problem:

```
[root@kuber24 ~]# helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
[root@kuber24 ~]# helm install nginx --tiller-namespace helm-system --namespace kube-public
Error: failed to download "nginx" (hint: running `helm repo update` may help)
```
Possible causes:

- There is no chart named nginx: check with `helm search nginx`.
- A network problem prevents the download; in that case Helm reports the error after a timeout.
Add the aliyun, GitHub, and official incubator chart repositories:

```shell
helm repo add gitlab https://charts.gitlab.io/
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com/
```
In this section, $NAME stands for a Helm repo/chart_name.

- Search charts: `helm search $NAME`
- List releases: `helm ls [--tiller-namespace $TILLER_NAMESPACE]`
- Inspect a package: `helm inspect $NAME`
- Show a package's configurable values: `helm inspect values $NAME`
- Install a chart: `helm install $NAME [--tiller-namespace $TILLER_NAMESPACE] [--namespace $CHART_DEPLOY_NAMESPACE]`
- Delete a release: `helm delete $RELEASE_NAME [--purge] [--tiller-namespace $TILLER_NAMESPACE]`
- Upgrade: `helm upgrade --set $PARAM_NAME=$PARAM_VALUE $RELEASE_NAME $NAME [--tiller-namespace $TILLER_NAMESPACE]`
- Roll back: `helm rollback $RELEASE_NAME $REVISION [--tiller-namespace $TILLER_NAMESPACE]`
Deleting a release without `--purge` only tears down the deployed Pods; the release's metadata is kept, and a chart cannot be released again under the same name.
When deploying MySQL, first query the configurable values and set them as needed.

Query the configurable values:

```
[root@kuber24 charts]# helm inspect values aliyun/mysql
## mysql image version
## ref: https://hub.docker.com/r/library/mysql/tags/
##
image: "mysql"
imageTag: "5.7.14"

## Specify password for root user
##
## Default: random 10 character string
# mysqlRootPassword: testing

## Create a database user
##
# mysqlUser:
# mysqlPassword:

## Allow unauthenticated access, uncomment to enable
##
# mysqlAllowEmptyPassword: true

## Create a database
##
# mysqlDatabase:

## Specify an imagePullPolicy (Required)
## It's recommended to change this to 'Always' if the image tag is 'latest'
## ref: http://kubernetes.io/docs/user-guide/images/#updating-images
##
imagePullPolicy: IfNotPresent

livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 3

readinessProbe:
  initialDelaySeconds: 5
  periodSeconds: 10
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3

## Persist data to a persistent volume
persistence:
  enabled: true
  ## database data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner. (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  accessMode: ReadWriteOnce
  size: 8Gi

## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  requests:
    memory: 256Mi
    cpu: 100m

# Custom mysql configuration files used to override default mysql settings
configurationFiles:
#  mysql.cnf: |-
#    [mysqld]
#    skip-name-resolve

## Configure the service
## ref: http://kubernetes.io/docs/user-guide/services/
service:
  ## Specify a service type
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
  type: ClusterIP
  port: 3306
  # nodePort: 32000
```
For example, to configure the MySQL root password, set the option directly with `--set`: `--set mysqlRootPassword=hgfgood`.
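An equivalent, and often more maintainable, way to parameterize a release per environment is a values file passed with `-f`. The file name and contents below are illustrative, not from the original article; they mirror the options shown by `helm inspect values`:

```yaml
# values-dev.yaml — hypothetical per-environment overrides for aliyun/mysql
mysqlRootPassword: hgfgood
persistence:
  size: 8Gi
service:
  type: ClusterIP
  port: 3306
```

Install with `helm install -f values-dev.yaml aliyun/mysql ...`; values given with `--set` take precedence over those in a values file.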
The persistence section of the values shows that MySQL needs persistent storage, so a PersistentVolume (PV) must be configured in the Kubernetes cluster.

Create a PV:

```
[root@kuber24 resources]# cat local-pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
  namespace: kube-public
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /home/k8s
```
The full release command is:

```shell
helm install --name mysql-dev --set mysqlRootPassword=hgfgood aliyun/mysql --tiller-namespace helm-system --namespace kube-public
```

List the released charts:

```
[root@kuber24 charts]# helm ls --tiller-namespace=helm-system
NAME       REVISION  UPDATED                   STATUS    CHART        APP VERSION  NAMESPACE
mysql-dev  1         Fri Oct 26 10:35:55 2018  DEPLOYED  mysql-0.3.5               kube-public
```
Under normal conditions the dashboard shows the release as healthy.

This MySQL chart needs the busybox image, and pulls occasionally fail because Docker defaults to the official Docker Hub overseas; download the busybox image in advance.
In the example above, MySQL was installed with root password hgfgood. Here we update it to hgf and then roll back to the original hgfgood.

Query the password set during installation:

```
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgfgood
```
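The secret value is just the password base64-encoded, not encrypted. As a quick local illustration of the decode step in the pipeline above (independent of any cluster):

```shell
# Kubernetes stores Secret values base64-encoded.
# Encode, as kubectl would store it:
echo -n 'hgfgood' | base64              # prints: aGdmZ29vZA==
# Decode, which is what `| base64 --decode` does above:
echo -n 'aGdmZ29vZA==' | base64 --decode; echo   # prints: hgfgood
```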
Update the MySQL root password:

```shell
helm upgrade --set mysqlRootPassword=hgf mysql-dev mysql --tiller-namespace helm-system
```

Query the root password again after the upgrade:

```
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgf
```
Check the release:

```
[root@kuber24 charts]# helm ls --tiller-namespace helm-system
NAME       REVISION  UPDATED                   STATUS    CHART        APP VERSION  NAMESPACE
mysql-dev  2         Fri Oct 26 11:26:48 2018  DEPLOYED  mysql-0.3.5               kube-public
```

The REVISION column shows that mysql-dev now has two revisions.
Roll back to revision 1:

```
[root@kuber24 charts]# helm rollback mysql-dev 1 --tiller-namespace helm-system
Rollback was a success! Happy Helming!
[root@kuber24 charts]# kubectl get secret --namespace kube-public mysql-dev-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo
hgfgood
```

The output shows the release has been rolled back.
`Error: could not find tiller`: whenever the Helm client needs to talk to Tiller, specify Tiller's namespace with `--tiller-namespace helm-system`; the parameter defaults to kube-system.
Downloads can fail because of network problems, for example:

```
[root@kuber24 ~]# helm install stable/mysql --tiller-namespace helm-system --namespace kube-public --debug
[debug] Created tunnel using local port: '32774'
[debug] SERVER: "127.0.0.1:32774"
[debug] Original chart version: ""
Error: Get https://kubernetes-charts.storage.googleapis.com/mysql-0.10.2.tgz: read tcp 10.20.13.24:56594->216.58.221.240:443: read: connection reset by peer
```
To work around this, go to the local charts directory and fetch the chart from the aliyun mirror, e.g. for mysql:

```
[root@kuber24 charts]# helm fetch aliyun/mysql --untar
[root@kuber24 charts]# ls
mysql
[root@kuber24 charts]# ls mysql/
Chart.yaml  README.md  templates  values.yaml
```
Then run helm install again, this time against the local chart directory:

```shell
helm install mysql --tiller-namespace helm-system --namespace kube-public
```

Add the `--debug` flag to turn on debug output.
```
[root@kuber24 charts]# helm install mysql --tiller-namespace helm-system --namespace kube-public --debug
[debug] Created tunnel using local port: '41905'
[debug] SERVER: "127.0.0.1:41905"
[debug] Original chart version: ""
[debug] CHART PATH: /root/Downloads/charts/mysql

NAME:   kissable-bunny
REVISION: 1
RELEASED: Thu Oct 25 20:20:23 2018
CHART: mysql-0.3.5
USER-SUPPLIED VALUES: {}

COMPUTED VALUES:
configurationFiles: null
image: mysql
imagePullPolicy: IfNotPresent
imageTag: 5.7.14
livenessProbe:
  failureThreshold: 3
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
persistence:
  accessMode: ReadWriteOnce
  enabled: true
  size: 8Gi
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
resources:
  requests:
    cpu: 100m
    memory: 256Mi
service:
  port: 3306
  type: ClusterIP

HOOKS:
MANIFEST:

---
# Source: mysql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
type: Opaque
data:
  mysql-root-password: "TzU5U2tScHR0Sg=="
  mysql-password: "RGRXU3Ztb3hQNw=="
---
# Source: mysql/templates/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "8Gi"
---
# Source: mysql/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  type: ClusterIP
  ports:
  - name: mysql
    port: 3306
    targetPort: mysql
  selector:
    app: kissable-bunny-mysql
---
# Source: mysql/templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kissable-bunny-mysql
  labels:
    app: kissable-bunny-mysql
    chart: "mysql-0.3.5"
    release: "kissable-bunny"
    heritage: "Tiller"
spec:
  template:
    metadata:
      labels:
        app: kissable-bunny-mysql
    spec:
      initContainers:
      - name: "remove-lost-found"
        image: "busybox:1.25.0"
        imagePullPolicy: "IfNotPresent"
        command: ["rm", "-fr", "/var/lib/mysql/lost+found"]
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      containers:
      - name: kissable-bunny-mysql
        image: "mysql:5.7.14"
        imagePullPolicy: "IfNotPresent"
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kissable-bunny-mysql
              key: mysql-root-password
        - name: MYSQL_PASSWORD
          valueFrom:
            secretKeyRef:
              name: kissable-bunny-mysql
              key: mysql-password
        - name: MYSQL_USER
          value: ""
        - name: MYSQL_DATABASE
          value: ""
        ports:
        - name: mysql
          containerPort: 3306
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}"
          initialDelaySeconds: 30
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "mysqladmin ping -u root -p${MYSQL_ROOT_PASSWORD}"
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 1
          successThreshold: 1
          failureThreshold: 3
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: kissable-bunny-mysql

LAST DEPLOYED: Thu Oct 25 20:20:23 2018
NAMESPACE: kube-public
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
kissable-bunny-mysql-c7df69d65-lmjzn  0/1    Pending  0         0s

==> v1/Secret
NAME                  AGE
kissable-bunny-mysql  1s

==> v1/PersistentVolumeClaim
kissable-bunny-mysql  1s

==> v1/Service
kissable-bunny-mysql  1s

==> v1beta1/Deployment
kissable-bunny-mysql  1s

NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
kissable-bunny-mysql.kube-public.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace kube-public kissable-bunny-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:

    $ mysql -h kissable-bunny-mysql -p

To connect to your database directly from outside the K8s cluster:

    MYSQL_HOST=127.0.0.1
    MYSQL_PORT=3306

    # Execute the following commands to route the connection:
    export POD_NAME=$(kubectl get pods --namespace kube-public -l "app=kissable-bunny-mysql" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME 3306:3306

    mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
```
TODO: a detailed packaging walkthrough.

```shell
# create a new chart
helm create hello-chart
# validate the chart
helm lint hello-chart
# package the chart into a .tgz
helm package hello-chart
```
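To illustrate how values flow into templates when building your own chart, a minimal template might look like the sketch below. The `hello-chart` name and the values referenced are illustrative; `helm create` generates a fuller scaffold than this:

```yaml
# templates/service.yaml — a minimal Go-templated manifest
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app: {{ .Chart.Name }}
    release: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
  selector:
    app: {{ .Chart.Name }}
```

With, say, `service.type: ClusterIP` and `service.port: 80` in values.yaml, `helm install --dry-run --debug hello-chart` renders the manifest locally without deploying anything, which is a convenient way to check the templating.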