Kubernetes with kops: is it correct to have each master in its own instance group?











When I create a Kubernetes cluster and specify --master-zones us-west-2a,us-west-2b,us-west-2c, I end up with 3 masters (which is fine), but they are in different instance groups.



i.e.



$ kops get ig
Using cluster from kubectl context: kube2.mydomain.net

NAME               ROLE    MACHINETYPE  MIN  MAX  ZONES
master-us-west-2a  Master  m4.large     1    1    us-west-2a
master-us-west-2b  Master  m4.large     1    1    us-west-2b
master-us-west-2c  Master  m4.large     1    1    us-west-2c
nodes              Node    m4.large     3    3    us-west-2a,us-west-2b,us-west-2c


I'm not sure whether this is correct, or whether it is considered a best practice.



I would think that all the masters should be in one instance group.

kubernetes kops










asked Nov 19 at 20:20 by Edgar Martinez, edited Nov 19 at 22:18 by Rico
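For context, a three-AZ HA cluster with the layout above would typically come from a create command along these lines. This is a hedged reconstruction from the output shown; the asker's exact flags are not in the question, and a configured KOPS_STATE_STORE is assumed:

# Hypothetical reconstruction -- yields one master instance group per
# zone plus a single "nodes" group, as in the kops get ig output above.
$ kops create cluster \
    --name kube2.mydomain.net \
    --zones us-west-2a,us-west-2b,us-west-2c \
    --master-zones us-west-2a,us-west-2b,us-west-2c \
    --master-size m4.large \
    --node-size m4.large \
    --node-count 3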
























1 Answer


























I'm assuming you mean multiple availability zones. This is the default behavior, for redundancy: cloud providers like AWS recommend spreading your control plane (and your workloads, for that matter) across different availability zones.



If you want to create them in a single zone, you can run something like this:



$ kops create cluster --zones=us-east-1c --master-count=3 k8s.example.com

Or:

$ kops create cluster --zones=us-east-1b,us-east-1c --master-zones=us-east-1c --master-count=3 k8s.example.com

More info in the kops documentation.



I believe the rationale behind having one instance group per master (instance groups map to ASGs in AWS) is that if you specify multiple availability zones in a single ASG, there is no guarantee the instances will land with exactly one in each availability zone.
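To make this concrete, each per-zone master group pins its backing ASG to a single subnet/AZ. A trimmed, approximate sketch of what such a spec looks like (fields such as the image and cluster labels are omitted, and a configured state store is assumed):

$ kops get ig master-us-west-2a -o yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: master-us-west-2a
spec:
  machineType: m4.large
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-west-2a   # exactly one subnet, so the backing ASG can only ever place this master in us-west-2a

The nodes group, by contrast, lists all three subnets; a single ASG is fine for interchangeable workers, but it cannot guarantee the strict one-master-per-AZ placement that an HA control plane needs.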






answered Nov 19 at 22:23 by Rico, edited Nov 21 at 15:29 (accepted)























• Well, I do want them across 3 AZs, but I also want them to be part of the same instance group, similar to the regular nodes: regular nodes are distributed across 3 AZs but are all part of the same instance group. --master-count=3 makes three instance groups with 1 machine per instance group, which seems wrong to me ¯\_(ツ)_/¯
  – Edgar Martinez, Nov 20 at 21:36

• Are you talking about autoscaling groups?
  – Rico, Nov 20 at 22:17

• No, instance groups, i.e. kops get ig nodes <- that is the command to get the instance group. I want all the masters in the same instance group but in different AWS Availability Zones. kops does this with the nodes by default; I'm not sure why it does not do this with the masters. Instead it creates 3 instance groups for the masters. It's odd how it does this.
  – Edgar Martinez, Nov 21 at 14:06

• Ohh I see, so instance groups map to autoscaling groups in AWS. What's the issue with having an ASG for each master? The issue is that if you specify multiple availability zones in an ASG, there is no guarantee that the instances will land so that there's one in each availability zone.
  – Rico, Nov 21 at 15:27

• That is what I concluded in the end, and I kept it as is. I am running the kiam server on the masters, and this issue arose from having to use a nodeSelector across all the masters. I probably should not use the masters as kiam server hosts, but that is how I got to this question.
  – Edgar Martinez, Nov 21 at 16:25
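On the kiam/nodeSelector point from the last comment: scheduling a workload onto every master does not require the masters to share an instance group, because scheduling keys off node labels, not instance groups. A minimal sketch, assuming a kops cluster where the masters carry the node-role.kubernetes.io/master label and taint; the DaemonSet name, namespace, and image tag are illustrative, and kiam's certificates and server args are omitted:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kiam-server            # illustrative name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: kiam-server
  template:
    metadata:
      labels:
        app: kiam-server
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""   # selects every master, regardless of instance group
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule                   # allow scheduling onto the tainted masters
      containers:
      - name: kiam-server
        image: quay.io/uswitch/kiam:v3.0     # example tag; certificates and flags omitted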










