[AWS] Creating EKS with eksctl (Managed Nodegroup with Launch Template + Managed AMI)

Topic

  • Create an EKS cluster and nodegroup using eksctl
  • Use a VPC and subnets that already exist
  • Create the nodegroup from a launch template
  • Use the EKS-managed AMI for the nodegroup rather than a custom AMI

Architecture

(Architecture diagram: a VPC in ap-northeast-2 with public subnets and private frontend/backend/manage subnets across two availability zones, as reflected in config.yaml below.)

Task List

  • Install eksctl
  • Create the AWS VPC/subnets
  • Create a launch template for the EKS worker nodes
  • Create the EKS cluster and nodegroup with eksctl

Prerequisites

  • eksctl
  • AWS VPC/subnets
  • A launch template for the EKS worker nodes

Installing eksctl on macOS

  • Install Homebrew (skip if already installed)
  • /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
  • Tap weaveworks/tap (a tap is simply a third-party Homebrew package repository)
  • brew tap weaveworks/tap
  • Install eksctl
  • brew install weaveworks/tap/eksctl
  • Verify the installation and check the version
  • eksctl version

Creating the VPC/Subnets

  • I created the VPC and subnets in the layout shown in the architecture diagram above; a minimal AWS CLI sketch follows below.
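
For reference, a minimal sketch of the same layout with the AWS CLI; the CIDRs mirror the commented values in config.yaml below, while the Name tag and availability zone are illustrative assumptions:

# CIDRs taken from the commented values in config.yaml; Name tag and AZ are illustrative.
aws ec2 create-vpc \
  --cidr-block 10.144.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=EKSCTL-TEST}]'

# Repeat per availability zone and tier (public, frontend, backend, manage).
aws ec2 create-subnet \
  --vpc-id vpc-123 \
  --cidr-block 10.144.10.0/24 \
  --availability-zone ap-northeast-2a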

Creating the Launch Template

  • In the AWS Management Console, choose Services - EC2 - Launch Templates
    1) Enter an appropriate launch template name
    2) Do not select an AMI image (the managed nodegroup will supply the EKS-managed AMI)
    3) Enter the instance type, key pair, storage, and network settings (enter /dev/xvda as the storage device name)
    4) Under Advanced details - User data, enter the following:
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
echo "Running custom user data script"

--==MYBOUNDARY==--
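The same launch template can also be created from the CLI; a minimal sketch in which the template name, instance type, key pair, and volume size are all assumptions, and userdata.mime is a file holding the MIME document above:

# All names and sizes here are hypothetical; adjust to your environment.
# UserData must be base64-encoded (GNU base64 shown; on macOS use `base64 -i userdata.mime`).
aws ec2 create-launch-template \
  --launch-template-name EKS-FRONTEND-LT \
  --launch-template-data '{
    "InstanceType": "m5.large",
    "KeyName": "my-keypair",
    "BlockDeviceMappings": [
      {"DeviceName": "/dev/xvda", "Ebs": {"VolumeSize": 50, "VolumeType": "gp3"}}
    ],
    "UserData": "'"$(base64 -w 0 userdata.mime)"'"
  }'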

Writing the eksctl config.yaml

Custom parameters: change the parameter values below to match your environment (a subnet lookup sketch follows the list).

  • VPC ID: vpc-123
  • Subnet IDs: subnet-111, subnet-222, subnet-333, subnet-444, subnet-555, subnet-666, subnet-777, subnet-888
  • Launch template IDs: lt-111, lt-222, lt-333
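
If the subnet IDs are not at hand, they can be listed per VPC with the AWS CLI; a quick sketch using the vpc-123 placeholder from above:

aws ec2 describe-subnets \
  --filters Name=vpc-id,Values=vpc-123 \
  --query 'Subnets[].{ID:SubnetId,CIDR:CidrBlock,AZ:AvailabilityZone}' \
  --output table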
  • config.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: EKSCTL-TEST
  region: ap-northeast-2

vpc:
  id: "vpc-123"
  # (optional, must match VPC ID used for each subnet below)

  #cidr: "10.144.0.0/16"
  # (optional, must match CIDR used by the given VPC)

  subnets:
    # must provide 'private' and/or 'public' subnets by availability zone as shown

    ### public
    public:
      public-a:
        id: "subnet-111"
        #cidr: "10.144.10.0/24"
        # (optional, must match CIDR used by the given subnet)
      public-c:
        id: "subnet-222"
        #cidr: "10.144.20.0/24"
        # (optional, must match CIDR used by the given subnet)


    ### private
    private:
      private-frontend-a:
        id: "subnet-333"
        #cidr: "10.144.152.0/25"
        # (optional, must match CIDR used by the given subnet)

      private-frontend-c:
        id: "subnet-444"
        #cidr: "10.144.152.128/25"
        # (optional, must match CIDR used by the given subnet)

      private-backend-a:
        id: "subnet-555"
        #cidr: "10.144.152.128/25"
        # (optional, must match CIDR used by the given subnet)

      private-backend-c:
        id: "subnet-666"
        #cidr: "10.144.152.128/25"
        # (optional, must match CIDR used by the given subnet)

      private-manage-a:
        id: "subnet-777"
        #cidr: "10.144.152.128/25"
        # (optional, must match CIDR used by the given subnet)

      private-manage-c:
        id: "subnet-888"
        #cidr: "10.144.152.128/25"
        # (optional, must match CIDR used by the given subnet)

managedNodeGroups:
- name: FRONTEND
  launchTemplate:
    id: lt-111
    version: "3" #optional (uses the default version of the launch template if unspecified)
  labels: {nodegroup-type: FRONTEND }
  privateNetworking: true
  subnets:
    - private-frontend-a
    - private-frontend-c
  tags:
    nodegroup: FRONTEND
  iam:
    withAddonPolicies:
      externalDNS: true
      certManager: true

- name: BACKEND
  launchTemplate:
    id: lt-222
    version: "3" #optional (uses the default version of the launch template if unspecified)
  labels: {nodegroup-type: BACKEND }
  privateNetworking: true
  subnets:
    - private-backend-a
    - private-backend-c
  tags:
    nodegroup: BACKEND
  iam:
    withAddonPolicies:
      externalDNS: true
      certManager: true


- name: MANAGE
  launchTemplate:
    id: lt-333
    version: "3" #optional (uses the default version of the launch template if unspecified)
  labels: {nodegroup-type: MANAGE }
  privateNetworking: true
  subnets:
    - private-manage-a
    - private-manage-c
  tags:
    nodegroup: MANAGE
  iam:
    withAddonPolicies:
      externalDNS: true
      certManager: true

Creating the EKS Cluster and Nodegroup with eksctl

eksctl create cluster -f config.yaml
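
Creation takes a while, since eksctl drives one CloudFormation stack for the cluster and one per nodegroup. Once it finishes, a quick sanity check, assuming eksctl wrote the kubeconfig (its default behavior):

eksctl get cluster --region ap-northeast-2
eksctl get nodegroup --cluster EKSCTL-TEST --region ap-northeast-2
kubectl get nodes --show-labels    # the nodegroup-type labels should appear here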

Troubleshooting

  1. Error when creating the nodegroup with eksctl
AWS::EKS::Nodegroup/ManagedNodeGroup: CREATE_FAILED – "Nodegroup MANAGE failed to stabilize: [{Code: NodeCreationFailure,Message: Unhealthy nodes in the kubernetes cluster,ResourceIds: [i-111, i-222]}]"
waiting for CloudFormation stack "eksctl-EKSCTL-TEST-nodegroup-MANAGE": ResourceNotReady: failed waiting for successful resource state
  • This happens when the EKS control plane and the worker nodes cannot communicate, so the node health checks fail.
  • As a workaround, open all inbound traffic on the security group set in the launch template (a CLI sketch follows below); once the cluster finishes creating, tighten the security group back down according to the policy below.
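
A minimal sketch of the temporary workaround rule, with a hypothetical security group ID; scoping the rule to the VPC CIDR from the config.yaml comments is safer than 0.0.0.0/0:

# sg-0123456789abcdef0 is hypothetical; replace with the launch template's security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol all \
  --cidr 10.144.0.0/16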

EKS Security Group Inbound/Outbound Policy
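
At minimum, per the Amazon EKS security group requirements (verify against the current EKS documentation):

  • Control plane security group: inbound TCP 443 from the worker node security group; outbound TCP 10250 (recommended 1025-65535) to the worker node security group.
  • Worker node security group: inbound all traffic from other nodes in the same security group, plus TCP 10250 (recommended 1025-65535, and 443) from the control plane security group; outbound open so nodes can pull images and reach the cluster API endpoint.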
