Deploying Highly Available Harbor on Kubernetes

Posted by 李森 on Monday, October 8, 2018

Preface

Harbor is currently the most popular open-source image registry, but deployments used to be based on Docker Compose, with essentially no high availability to speak of, making them unfit for production. Now that harbor-helm is available, deploying Harbor on k8s combined with persistent storage largely resolves those earlier reliability and performance concerns.

Helm installation

Helm itself won't be covered in detail here; for installation, see kubeasz (project page and Helm documentation).
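
If you are bootstrapping Helm v2 by hand instead, a minimal non-TLS sketch is below; the `tiller` ServiceAccount name is my assumption. kubeasz sets up a TLS-secured Tiller, which is why a `--tls` flag appears in the `helms` alias later in this post.

```bash
# Create a ServiceAccount for Tiller and grant it cluster-admin (lab setups only)
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller

# Install Tiller into kube-system and verify the client can reach it
helm init --service-account tiller --tiller-namespace kube-system
helm version
```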

nginx-ingress

```bash
# Download the ingress-nginx manifest and point the images at a domestic mirror
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml -O nginx-ingress.yaml
sed -i 's#k8s.gcr.io#registry.cn-shenzhen.aliyuncs.com/k8s-kubeadm#g' nginx-ingress.yaml
kubectl apply -f nginx-ingress.yaml

# Verify the controller is running
kubectl get pod --all-namespaces | grep nginx
```
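
Note that the mandatory manifest only creates the controller itself; depending on your ingress-nginx version and environment, you may still need a Service to expose it on bare metal. A hedged NodePort sketch; the label selectors must match the controller Deployment in your copy of the manifest:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
EOF
```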

Storage requirements

I'm using Ceph RBD here; you can use other storage as well. For setting up a Ceph RBD environment, see my earlier blog post.
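
For reference, a sketch of a StorageClass matching the `ceph-storageclass` name used in values.yaml below, assuming the in-tree RBD provisioner and pre-created Ceph secrets; monitor addresses, pool, and secret names are placeholders for your environment:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storageclass
provisioner: kubernetes.io/rbd   # or an external provisioner such as ceph.com/rbd
parameters:
  monitors: 172.16.68.1:6789            # placeholder: your Ceph monitor addresses
  pool: kube                            # placeholder: your RBD pool
  adminId: admin
  adminSecretName: ceph-admin-secret    # placeholder: pre-created secret
  adminSecretNamespace: kube-system
  userId: kube
  userSecretName: ceph-user-secret      # placeholder: pre-created secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
EOF
```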

Starting the deployment

Fetch the chart

```bash
wget https://github.com/goharbor/harbor-helm/archive/1.0.0.zip
unzip 1.0.0.zip
cd harbor-helm-1.0.0
```

values.yaml configuration
```bash
[root@app001 harbor-helm-1.0.0]# cat values.yaml
expose:
  # Set the way how to expose the service. Set the type as "ingress",
  # "clusterIP" or "nodePort" and fill the information in the corresponding
  # section
  type: ingress
  tls:
    # Enable the tls or not. Note: if the type is "ingress" and the tls
    # is disabled, the port must be included in the command when pull/push
    # images. Refer to https://github.com/goharbor/harbor/issues/5291
    # for the detail.
    enabled: true
    # Fill the name of secret if you want to use your own TLS certificate
    # and private key. The secret must contain keys named tls.crt and
    # tls.key that contain the certificate and private key to use for TLS
    # The certificate and private key will be generated automatically if
    # it is not set
    secretName: ""
    # By default, the Notary service will use the same cert and key as
    # described above. Fill the name of secret if you want to use a
    # separated one. Only needed when the type is "ingress".
    notarySecretName: ""
    # The common name used to generate the certificate; it's necessary
    # when the type is "clusterIP" or "nodePort" and "secretName" is null
    commonName: ""
  ingress:
    hosts:
      core: harbor-test.xxx.com
      notary: notary-test.xxx.com
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
  clusterIP:
    # The name of ClusterIP service
    name: harbor
    ports:
      # The service port Harbor listens on when serving with HTTP
      httpPort: 80
      # The service port Harbor listens on when serving with HTTPS
      httpsPort: 443
      # The service port Notary listens on. Only needed when notary.enabled
      # is set to true
      notaryPort: 4443
  nodePort:
    # The name of NodePort service
    name: harbor
    ports:
      http:
        # The service port Harbor listens on when serving with HTTP
        port: 80
        # The node port Harbor listens on when serving with HTTP
        nodePort: 30012
      https:
        # The service port Harbor listens on when serving with HTTPS
        port: 443
        # The node port Harbor listens on when serving with HTTPS
        nodePort: 30013
      # Only needed when notary.enabled is set to true
      notary:
        # The service port Notary listens on
        port: 4443
        # The node port Notary listens on
        nodePort: 30014

# The external URL for Harbor core service. It is used to
# 1) populate the docker/helm commands shown on the portal
# 2) populate the token service URL returned to docker/notary client
#
# Format: protocol://domain[:port]. Usually:
# 1) if "expose.type" is "ingress", the "domain" should be
# the value of "expose.ingress.hosts.core"
# 2) if "expose.type" is "clusterIP", the "domain" should be
# the value of "expose.clusterIP.name"
# 3) if "expose.type" is "nodePort", the "domain" should be
# the IP address of k8s node
#
# If Harbor is deployed behind the proxy, set it as the URL of proxy
externalURL: https://harbor-test.xxx.com

# The persistence is enabled by default and a default StorageClass
# is needed in the k8s cluster to provision volumes dynamically.
# Specify another StorageClass in the "storageClass" or set "existingClaim"
# if you have already existing persistent volumes to use
#
# For storing images and charts, you can also use "azure", "gcs", "s3",
# "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  enabled: true
  # Set it to "keep" to avoid removing PVCs during a helm delete
  # operation. Leaving it empty will delete PVCs after the chart is deleted
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      # Use the existing PVC which must be created manually before bound
      existingClaim: ""
      # Specify the "storageClass" used to provision the volume. Or the default
      # StorageClass will be used(the default).
      # Set it to "-" to disable dynamic provisioning
      storageClass: "ceph-storageclass"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 50Gi
    chartmuseum:
      existingClaim: ""
      storageClass: "ceph-storageclass"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
    jobservice:
      existingClaim: ""
      storageClass: "ceph-storageclass"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 2Gi
    # If external database is used, the following settings for database will
    # be ignored
    database:
      existingClaim: ""
      storageClass: "ceph-storageclass"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 2Gi
    # If external Redis is used, the following settings for Redis will
    # be ignored
    redis:
      existingClaim: ""
      storageClass: "ceph-storageclass"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 2Gi
  # Define which storage backend is used for registry and chartmuseum to store
  # images and charts. Refer to
  # https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  # for the detail.
  imageChartStorage:
    # Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    # "oss" and fill the information needed in the corresponding section. The type
    # must be "filesystem" if you want to use persistent volumes for registry
    # and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      #realm: core.windows.net
    gcs:
      bucket: bucketname
      # TODO: support the keyfile of gcs
      #keyfile: /path/to/keyfile
      #rootdirectory: /gcs/object/name/prefix
      #chunksize: "5242880"
    s3:
      region: us-west-1
      bucket: bucketname
      #accesskey: awsaccesskey
      #secretkey: awssecretkey
      #regionendpoint: http://myobjects.local
      #encrypt: false
      #keyid: mykeyid
      #secure: true
      #v4auth: true
      #chunksize: "5242880"
      #rootdirectory: /s3/object/name/prefix
      #storageclass: STANDARD
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      #region: fr
      #tenant: tenantname
      #tenantid: tenantid
      #domain: domainname
      #domainid: domainid
      #trustid: trustid
      #insecureskipverify: false
      #chunksize: 5M
      #prefix:
      #secretkey: secretkey
      #accesskey: accesskey
      #authversion: 3
      #endpointtype: public
      #tempurlcontainerkey: false
      #tempurlmethods:
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      #endpoint: endpoint
      #internal: false
      #encrypt: false
      #secure: true
      #chunksize: 10M
      #rootdirectory: rootdirectory

imagePullPolicy: IfNotPresent

logLevel: debug
# The initial password of Harbor admin. Change it from portal after launching Harbor
harborAdminPassword: "xxxx2018"
# The secret key used for encryption. Must be a string of 16 chars.
secretKey: "not-a-secure-key"

# If expose the service via "ingress", the Nginx will not be used
nginx:
  image:
    repository: goharbor/nginx-photon
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

portal:
  image:
    repository: goharbor/harbor-portal
    tag: v1.7.0
  replicas: 1
# resources:
#  requests:
#    memory: 256Mi
#    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

core:
  image:
    repository: goharbor/harbor-core
    tag: v1.7.0
  replicas: 1
# resources:
#  requests:
#    memory: 256Mi
#    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

adminserver:
  image:
    repository: goharbor/harbor-adminserver
    tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

jobservice:
  image:
    repository: goharbor/harbor-jobservice
    tag: v1.7.0
  replicas: 1
  maxJobWorkers: 10
  # The logger for jobs: "file", "database" or "stdout"
  jobLogger: file
# resources:
#   requests:
#     memory: 256Mi
#     cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

registry:
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.6.2-v1.7.0
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

chartmuseum:
  enabled: true
  image:
    repository: goharbor/chartmuseum-photon
    tag: v0.7.1-v1.7.0
  replicas: 1
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

clair:
  enabled: true
  image:
    repository: goharbor/clair-photon
    tag: v2.0.7-v1.7.0
  replicas: 1
  # The http(s) proxy used to update vulnerabilities database from internet
  httpProxy:
  httpsProxy:
  # The interval of clair updaters, the unit is hour, set to 0 to
  # disable the updaters
  updatersInterval: 12
  # resources:
  #  requests:
  #    memory: 256Mi
  #    cpu: 100m
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

notary:
  enabled: true
  server:
    image:
      repository: goharbor/notary-server-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  signer:
    image:
      repository: goharbor/notary-signer-photon
      tag: v0.6.1-v1.7.0
    replicas: 1
  nodeSelector: {}
  tolerations: []
  affinity: {}
  ## Additional deployment annotations
  podAnnotations: {}

database:
  # if external database is used, set "type" to "external"
  # and fill the connection information in the "external" section
  type: internal
  internal:
    image:
      repository: goharbor/harbor-db
      tag: v1.7.0
    # The initial superuser password for internal database
    password: "changeit"
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.1"
    port: "5432"
    username: "user"
    password: "password"
    coreDatabase: "registry"
    clairDatabase: "clair"
    notaryServerDatabase: "notary_server"
    notarySignerDatabase: "notary_signer"
    sslmode: "disable"
  ## Additional deployment annotations
  podAnnotations: {}

redis:
  # if external Redis is used, set "type" to "external"
  # and fill the connection information in the "external" section
  type: internal
  internal:
    image:
      repository: goharbor/redis-photon
      tag: v1.7.0
    # resources:
    #  requests:
    #    memory: 256Mi
    #    cpu: 100m
    nodeSelector: {}
    tolerations: []
    affinity: {}
  external:
    host: "192.168.0.2"
    port: "6379"
    # The "coreDatabaseIndex" must be "0" as the library Harbor
    # uses doesn't support configuring it
    coreDatabaseIndex: "0"
    jobserviceDatabaseIndex: "1"
    registryDatabaseIndex: "2"
    chartmuseumDatabaseIndex: "3"
    password: ""
  ## Additional deployment annotations
  podAnnotations: {}
```

What you need to change:

  • the ingress domains: set up DNS resolution for harbor-test.xxx.com and notary-test.xxx.com
  • the storageClass for persistent storage
  • the storage size of each component, according to your actual needs
  • most importantly, the Harbor admin password
  • I used the in-cluster database and Redis here; if you can, use external highly available clusters instead

(A `--set` alternative to editing values.yaml is sketched right after this list.)
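
If you'd rather not edit values.yaml in place, the same changes can be passed as overrides at install time. A sketch using the values from this post (`helms` is the TLS alias explained below):

```bash
helms install . --name my-harbor \
  --set expose.ingress.hosts.core=harbor-test.xxx.com \
  --set expose.ingress.hosts.notary=notary-test.xxx.com \
  --set externalURL=https://harbor-test.xxx.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=ceph-storageclass \
  --set harborAdminPassword=xxxx2018
```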

Install the chart


```bash
helms install . --debug --name my-harbor

[root@app001 harbor-helm-1.0.0]# kubectl get pod
NAME                                              READY   STATUS    RESTARTS   AGE
my-harbor-harbor-adminserver-576588db97-jhkl4     1/1     Running   0          6m30s
my-harbor-harbor-chartmuseum-c85854bb9-ktjst      1/1     Running   0          6m30s
my-harbor-harbor-clair-5db94678d7-s6mvj           1/1     Running   0          6m30s
my-harbor-harbor-core-8586b7759b-qf92d            1/1     Running   4          6m30s
my-harbor-harbor-database-0                       1/1     Running   0          6m30s
my-harbor-harbor-jobservice-844ffdcff-lf99v       1/1     Running   1          6m30s
my-harbor-harbor-notary-server-989c6f99d-7979w    1/1     Running   0          6m30s
my-harbor-harbor-notary-signer-6bd6f94975-6zv8m   1/1     Running   0          6m30s
my-harbor-harbor-portal-dd4cdc898-9h4jx           1/1     Running   0          6m30s
my-harbor-harbor-redis-0                          1/1     Running   0          6m30s
my-harbor-harbor-registry-9789b9775-cklsh         2/2     Running   0          6m30s

# If the deployment fails, clean up before trying again
helms delete --purge my-harbor
```

I'm running a TLS-secured Helm here, so I created a `helms` alias: `alias helms='helm --tls --tiller-namespace kube-system'`.

Client configuration

Fetch the certificate

```bash
kubectl get secrets/my-harbor-harbor-ingress -o jsonpath="{.data.ca\.crt}" | base64 --decode > /etc/kubernetes/ssl/harbor.crt
```
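
A quick optional sanity check that the extracted file really is a certificate:

```bash
openssl x509 -in /etc/kubernetes/ssl/harbor.crt -noout -subject -dates
```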

Distribute the configuration

```bash
# Create the cert directory on every Docker client and distribute the CA
ansible kube-cluster,deploy -m file -a 'path=/etc/docker/certs.d/harbor.xxx.com state=directory'
ansible kube-cluster,deploy -m copy -a 'src=/etc/kubernetes/ssl/harbor.crt dest=/etc/docker/certs.d/harbor.xxx.com'

# Add a hosts entry for harbor, then distribute /etc/hosts
# 172.16.68.15 harbor.xxx.com
ansible kube-cluster -m copy -a 'src=/etc/hosts dest=/etc/hosts'
```

Login test

```bash
docker login harbor.xxx.com
# Enter admin and the password set in values.yaml
```
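
To go beyond login, a small push test; Harbor creates a `library` project by default, and the image below is just an example:

```bash
docker pull busybox:latest
docker tag busybox:latest harbor.xxx.com/library/busybox:latest
docker push harbor.xxx.com/library/busybox:latest
```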

That completes the highly available Harbor deployment with Helm!

Upgrading Harbor

PVCs are kept by default, so a helm delete does not remove the data. To upgrade to a new version, simply point the persistent storage settings at the existing PVCs.
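
A sketch of that upgrade flow; the PVC names below are my assumptions, so check `kubectl get pvc` for the real ones in your cluster:

```bash
# Find the PVCs kept from the previous release
kubectl get pvc

# Point the new release at the existing claims
helms upgrade my-harbor . \
  --set persistence.persistentVolumeClaim.registry.existingClaim=my-harbor-harbor-registry \
  --set persistence.persistentVolumeClaim.database.existingClaim=database-data-my-harbor-harbor-database-0
```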

The biggest pitfall when upgrading or operating Harbor: NFS mounts can hang, which keeps Postgres from starting, and the fix is to restart the nfs-server!