4. K8s Hands-On Projects


1. Deploying Services to Kubernetes

1.1 Deploying WordPress + MySQL

  1. Create the wordpress namespace

    kubectl create namespace wordpress
  2. Create the wordpress-db.yaml file:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mysql-deploy
      namespace: wordpress
      labels:
        app: mysql
    spec:
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
          - name: mysql
            image: mysql:5.6  
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 3306
              name: dbport
            env:
            - name: MYSQL_ROOT_PASSWORD
              value: rootPassW0rd
            - name: MYSQL_DATABASE
              value: wordpress
            - name: MYSQL_USER
              value: wordpress
            - name: MYSQL_PASSWORD
              value: wordpress
            volumeMounts:
            - name: db
              mountPath: /var/lib/mysql
          volumes:
          - name: db
            hostPath:
              path: /var/lib/mysql
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      namespace: wordpress
    spec:
      selector:
        app: mysql
      ports:
      - name: mysqlport
        protocol: TCP
        port: 3306
        targetPort: dbport
  3. Create the resources (the MySQL database) from wordpress-db.yaml

    kubectl apply -f wordpress-db.yaml
    kubectl get pods -n wordpress      # note the pod IP; it is needed when editing wordpress.yaml
    kubectl get svc mysql -n wordpress
    kubectl describe svc mysql -n wordpress

    kubectl describe pod mysql-deploy-78cd6964bd-pnstt -n wordpress

  4. Create the wordpress.yaml file:
    Set the MySQL host in the yaml; either the pod's IP or the service's name works. Here we use the pod's IP.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress-deploy
      namespace: wordpress
      labels:
        app: wordpress
    spec:
      selector:
        matchLabels:
          app: wordpress
      template:
        metadata:
          labels:
            app: wordpress
        spec:
          containers:
          - name: wordpress
            image: wordpress
            imagePullPolicy: IfNotPresent
            ports:
            - containerPort: 80
              name: wdport
            env:
            - name: WORDPRESS_DB_HOST
              value: 192.168.157.142:3306
            - name: WORDPRESS_DB_USER
              value: wordpress
            - name: WORDPRESS_DB_PASSWORD
              value: wordpress
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
      namespace: wordpress
    spec:
      type: NodePort
      selector:
        app: wordpress
      ports:
      - name: wordpressport
        protocol: TCP
        port: 80
        targetPort: wdport
  5. Create the resources (WordPress) from wordpress.yaml

    kubectl apply -f wordpress.yaml    # set the MySQL IP in the file first; the service name "mysql" also works
    kubectl get pods -n wordpress 
    kubectl get svc -n wordpress   # note the NodePort it maps to, e.g. 30063

  6. Access test

    kubectl get pods -n wordpress -o wide # check the pods
    kubectl get svc -n wordpress -o wide # check the service

    lsof -i tcp:31866
    netstat -nltp|grep 31866

  7. Open 172.16.11.129:31866 in a browser

    After completing the setup form, the site page appears.

    From Windows, any cluster node's IP plus the NodePort (e.g. IP:30063) works as well.

  8. Within the cluster, the MySQL address can be configured not only by pod IP but also by the service name
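
    As a sketch, the WORDPRESS_DB_HOST entry in wordpress.yaml could then use the cluster-internal DNS name of the mysql Service (the Service and namespace names come from wordpress-db.yaml above) instead of a pod IP:

    env:
    - name: WORDPRESS_DB_HOST
      # <service>.<namespace>.svc.cluster.local resolves inside the cluster
      value: mysql.wordpress.svc.cluster.local:3306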

1.2 Deploying a Spring Boot Project

Workflow: pick the service -> write a Dockerfile and build the image -> push the image to a registry -> write the K8s manifests -> create the resources

网盘/Kubernetes实战走起/课堂源码/springboot-demo

  1. 准备Spring Boot项目springboot-demo

    @RestController
    public class K8SController {
        @RequestMapping("/k8s")
        public String k8s(){
            String result="";
            try {
                // create an InetAddress object via getLocalHost()
                InetAddress address = InetAddress.getLocalHost();
                result="hostname: "+address.getHostName()+", hostaddress: "+address.getHostAddress();
                System.out.println(address.getHostName());    // host name
                System.out.println(address.getHostAddress()); // host address
            }catch(Exception e){
                e.printStackTrace();
            }
            return "hello K8s <br/> "+result;
        }
    }
  2. Build the xxx.jar and upload it to the springboot-demo directory

    mvn clean package
  3. Write the Dockerfile

    mkdir springboot-demo
    cd springboot-demo
    vi Dockerfile
    FROM openjdk:8-jre-alpine
    COPY springboot-demo-0.0.1-SNAPSHOT.jar /springboot-demo.jar
    ENTRYPOINT ["java","-jar","/springboot-demo.jar"]
  4. Build the image from the Dockerfile

    docker build -t springboot-demo-image:v1.0 .
  5. Create a container with docker run

    docker run -d --name s1 -p 8090:8080 springboot-demo-image:v1.0
  6. Access test

    docker inspect s1    # read the container IP from the output
    curl <container-ip>:8080/k8s

  7. Push the image to an image registry

    # log in to the Aliyun image registry
    docker login --username=itcrazy2016@163.com registry.cn-hangzhou.aliyuncs.com
    docker tag springboot-demo-image:v1.0 registry.cn-hangzhou.aliyuncs.com/itcrazy2016/springboot-demo-image:v1.0
    docker push registry.cn-hangzhou.aliyuncs.com/itcrazy2016/springboot-demo-image:v1.0

    In this walkthrough the image is actually pushed to a private Harbor registry

  8. Write the Kubernetes manifest

    vi springboot-demo.yaml
    kubectl apply -f springboot-demo.yaml
    # deploy the Pod via a Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata: 
      name: springboot-demo
    spec: 
      selector: 
        matchLabels: 
          app: springboot-demo
      replicas: 1
      template: 
        metadata:
          labels: 
            app: springboot-demo
        spec: 
          containers: 
          - name: springboot-demo
            image: 172.16.11.125/images/springboot-demo-image:v1.0
            ports: 
            - containerPort: 8080
    ---
    # create a Service for the Pod
    apiVersion: v1
    kind: Service
    metadata: 
      name: springboot-demo
    spec: 
      ports: 
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector: 
        app: springboot-demo
    ---
    # create an Ingress defining the routing rules; make sure the nginx ingress controller is deployed first
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata: 
      name: springboot-demo
    spec: 
      rules: 
      - host: k8s.demo.gper.club
        http: 
          paths: 
          - path: /
            backend: 
              serviceName: springboot-demo
              servicePort: 80
  9. Inspect the resources

    kubectl get pods
    kubectl get pods -o wide
    curl pod_ip:8080/k8s
    kubectl get svc
    kubectl scale deploy springboot-demo --replicas=5

  10. Configure the Windows hosts file [make sure the nginx ingress controller has been created beforehand]

    # 192.168.0.61 springboot.jack.com
    172.16.11.129   k8s.demo.gper.club
  11. Access from a Windows browser

    # http://springboot.jack.com/k8s
    http://k8s.demo.gper.club/k8s

1.3 Deploying a Nacos Project

1.3.1 Traditional Approach

  1. Prepare two Spring Boot projects named user and order, representing two services

    网盘/Kubernetes实战走起/课堂源码/user

    网盘/Kubernetes实战走起/课堂源码/order

    1. The pom.xml file; both projects are similar

      <?xml version="1.0" encoding="UTF-8"?>
      <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
          <modelVersion>4.0.0</modelVersion>
          <parent>
              <groupId>org.springframework.boot</groupId>
              <artifactId>spring-boot-starter-parent</artifactId>
              <version>2.2.1.RELEASE</version>
              <relativePath/> <!-- lookup parent from repository -->
          </parent>
          <groupId>com.gupao</groupId>
          <artifactId>user</artifactId>
          <version>0.0.1-SNAPSHOT</version>
          <name>user</name>
          <description>Demo project for Spring Boot</description>
          <properties>
              <java.version>1.8</java.version>
          </properties>
          <dependencies>
              <dependency>
                  <groupId>org.springframework.boot</groupId>
                  <artifactId>spring-boot-starter-web</artifactId>
              </dependency>
              <dependency>
                  <groupId>org.springframework.boot</groupId>
                  <artifactId>spring-boot-starter-test</artifactId>
                  <scope>test</scope>
                  <exclusions>
                      <exclusion>
                          <groupId>org.junit.vintage</groupId>
                          <artifactId>junit-vintage-engine</artifactId>
                      </exclusion>
                  </exclusions>
              </dependency>
              <!-- nacos client dependency -->
              <dependency>
                  <groupId>org.springframework.cloud</groupId>
                  <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
              </dependency>
          </dependencies>
          <dependencyManagement>
              <dependencies>
                  <!-- Spring Cloud dependencies -->
                  <dependency>
                      <groupId>org.springframework.cloud</groupId>
                      <artifactId>spring-cloud-dependencies</artifactId>
                      <version>Greenwich.SR1</version>
                      <type>pom</type>
                      <scope>import</scope>
                  </dependency>
                  <!-- Spring Cloud Alibaba dependencies -->
                  <dependency>
                      <groupId>org.springframework.cloud</groupId>
                      <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                      <version>0.9.0.RELEASE</version>
                      <type>pom</type>
                      <scope>import</scope>
                  </dependency>
              </dependencies>
          </dependencyManagement>
          <build>
              <plugins>
                  <plugin>
                      <groupId>org.springframework.boot</groupId>
                      <artifactId>spring-boot-maven-plugin</artifactId>
                  </plugin>
              </plugins>
          </build>
      </project>
    2. The application.yml config file; the user service uses port 8080, order uses 9090

      spring:
        cloud:
          nacos:
            discovery:
              server-addr: 172.16.11.125:8848
        application:
          name: user
      
      server:
        port: 8080
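
      order's application.yml is assumed to differ only in the service name and port (a sketch inferred from the description above):

      spring:
        cloud:
          nacos:
            discovery:
              server-addr: 172.16.11.125:8848
        application:
          name: order

      server:
        port: 9090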
    3. UserApplication and OrderApplication, both are minimal boilerplate

      @SpringBootApplication
      public class OrderApplication {
      
          public static void main(String[] args) {
              SpringApplication.run(OrderApplication.class, args);
          }
      }
    4. TestController: the user service calls order through Nacos as a test

      @RestController
      @RequestMapping("/user")
      public class TestController {
      
          @Autowired
          private DiscoveryClient discoveryClient;
      
          @RequestMapping("/test")
          public List<ServiceInstance> findServiceInstance() throws Exception{
            // query all instance info registered under the given service name
              List<ServiceInstance> list=this.discoveryClient.getInstances("order");
              ServiceInstance serviceInstance=list.get(0);
              URI uri = serviceInstance.getUri();
              System.out.println(uri.toString());
              this.testUrl(uri.toString());
              return list;
          }
      
          public void testUrl(String urlString){
              URL url;
              try {
                  url = new URL(urlString);
                  URLConnection co =  url.openConnection();
                  co.connect();
                System.out.println("connection OK");
              } catch (Exception e1) {
                System.out.println("connection unavailable!");
                  url = null;
              }
          }
      }
  2. Download and deploy nacos server 1.0.0

    GitHub: https://github.com/alibaba/nacos/releases

    网盘/Kubernetes实战走起/课堂源码/nacos-server-1.0.0.tar.gz

  3. Upload nacos-server-1.0.0.tar.gz to the Aliyun server 39:/usr/local/nacos

  4. Extract: tar -zxvf nacos-server-1.0.0.tar.gz

  5. In the bin directory, run: sh startup.sh -m standalone [a Java runtime is required]

  6. Browse to 39.100.39.63:8848/nacos

  7. Username and password: nacos

  8. Register the applications with Nacos; remember to update application.yml in each Spring Boot project

  9. Register the user/order services with Nacos

  10. The user service can now discover the order service

  11. Start both Spring Boot projects, then check the service list on the Nacos server

  12. Verify that user can discover order's address

    Visit localhost:8080/user/test and check the log output to confirm whether the order address is reachable

1.3.2 The K8s Approach

1.3.2.1 user and order both as Pods in K8s

Question: if both user and order are migrated into K8s, will service registration and discovery still work?

  1. Build the xxx.jar files and upload them to the user and order directories on the master node

    resources/nacos/jar/xxx.jar

    mvn clean package
  2. In each directory, write a Dockerfile

    vi Dockerfile

    # user's Dockerfile
    FROM openjdk:8-jre-alpine
    COPY user-0.0.1-SNAPSHOT.jar /user.jar
    ENTRYPOINT ["java","-jar","/user.jar"]

    # order's Dockerfile
    FROM openjdk:8-jre-alpine
    COPY order-0.0.1-SNAPSHOT.jar /order.jar
    ENTRYPOINT ["java","-jar","/order.jar"]
  3. Build the images from the Dockerfiles

    docker build -t user-image:v1.0 .
    docker build -t order-image:v1.0 .
  4. Push the images to the registry

    # log in to the Aliyun image registry
    docker login --username=itcrazy2016@163.com registry.cn-hangzhou.aliyuncs.com
    docker tag user-image:v1.0 registry.cn-hangzhou.aliyuncs.com/itcrazy2016/user-image:v1.0
    docker push registry.cn-hangzhou.aliyuncs.com/itcrazy2016/user-image:v1.0

  5. Write the Kubernetes manifests

    vi user.yaml/order.yaml

    kubectl apply -f user.yaml/order.yaml
    user.yaml:

    # deploy the Pod via a Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata: 
      name: user
    spec: 
      selector: 
        matchLabels: 
          app: user
      replicas: 1
      template: 
        metadata:
          labels: 
            app: user
        spec: 
          containers: 
          - name: user
            image: 172.16.11.125/images/user-image:v1.0
            ports: 
            - containerPort: 8080
    ---
    # create a Service for the Pod
    apiVersion: v1
    kind: Service
    metadata: 
      name: user
    spec: 
      ports: 
      - port: 80
        protocol: TCP
        targetPort: 8080
      selector: 
        app: user
    ---
    # create an Ingress defining the routing rules; make sure the nginx ingress controller is deployed first
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata: 
      name: user
    spec: 
      rules: 
      - host: k8s.demo.gper.club
        http: 
          paths: 
          - path: /
            backend: 
              serviceName: user
              servicePort: 80

    order.yaml:

    # deploy the Pod via a Deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata: 
      name: order
    spec: 
      selector: 
        matchLabels: 
          app: order
      replicas: 1
      template: 
        metadata:
          labels: 
            app: order
        spec: 
          containers: 
          - name: order
            image: 172.16.11.125/images/order-image:v1.0
            ports: 
            - containerPort: 9090
    ---
    # create a Service for the Pod
    apiVersion: v1
    kind: Service
    metadata: 
      name: order
    spec: 
      ports: 
      - port: 80
        protocol: TCP
        targetPort: 9090
      selector: 
        app: order

  6. Inspect the resources

    kubectl get pods
    kubectl get pods -o wide
    kubectl get svc
    kubectl get ingress

  7. Check the service info on the Nacos server

    Notice that the services are registered with pod IPs, e.g. 192.168.80.206/192.168.190.82

  8. Access test

    # 01 from inside the cluster
    curl user-pod-ip:8080/user/test
    kubectl logs -f  -c    [mainly to check the log output and confirm whether user can reach order]
    # 02 from outside the cluster, e.g. a Windows browser; the ingress created earlier in the cluster can be deleted first
    http://k8s.demo.gper.club/user/test

Conclusion: when all the services run inside the K8s cluster, the pod IPs end up registered with the Nacos server, and service discovery via pod IP just works.

1.3.2.2 user outside the cluster, order migrated to K8s

Suppose user is now outside the K8s cluster while order runs inside it

For example, user runs from the local IDEA while order is the K8s deployment from above

  1. Start the user service from the local IDEA

  2. Check the user service list on the Nacos server

  3. Visit localhost:8080/user/test locally and watch the log output in IDEA: the call targets order's pod IP, so the service call is bound to fail. How can this be solved?

  4. Approaches

    The call fails because order's pod IP cannot be reached from outside the cluster. Two options:
    1. Write the IP of the node hosting the pod into the container at startup, i.e. maintain a mapping between pod IP and node IP
    2. Use the host network mode, so the pod uses the node's IP directly; with multiple replicas this risks port conflicts [a pod scheduling policy can ensure that, as far as possible, replicas are not placed on the same worker]
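
    The scheduling policy mentioned in option 2 can be sketched with pod anti-affinity (a hypothetical addition to the pod template in order.yaml, not part of the original manifests):

      template:
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    app: order
                topologyKey: kubernetes.io/hostname   # at most one order pod per node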
  5. Let's demonstrate the host network approach by modifying the order.yaml file

    After editing and before applying, check whether port 9090 is already in use on each node

    lsof -i tcp:9090

    ...
      template:
        metadata:
          labels:
            app: order
        spec:
          # the key line to add; note its position in order.yaml
          hostNetwork: true
          containers:
          - name: order
            image: 172.16.11.125/images/order-image:v1.0
    ...
  6. kubectl apply -f order.yaml

    • kubectl get pods -o wide —> find which node the pod is running on, e.g. w2

    • check whether port 9090 is now listening on w2

  7. Check the order service on the Nacos server

    It is now registered with the w2 host's IP and port 9090

  8. Access test from the local IDEA

    localhost:8080/user/test


Author: Soulballad
Copyright: unless otherwise stated, all posts on this blog are licensed under CC BY 4.0. Please credit Soulballad when reposting!