Disable Host Verification for Hadoop

I have set up Hadoop to run as a Linux service on an AWS EC2 instance (deployed via an Auto Scaling group), and it runs as a spark user. However, when I start the service, the systemctl status output for Hadoop shows the messages below, complaining that it can't verify the host key of the master (I'm using Consul for auto-discovery of the master and workers):



Jan 11 23:40:23 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: Stopping namenodes on [spark-master.service.consul]
Jan 11 23:40:23 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: spark-master.service.consul: Host key verification failed.
Jan 11 23:40:23 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: spark-worker.service.consul: Host key verification failed.
Jan 11 23:40:24 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: Stopping secondary namenodes [0.0.0.0]
Jan 11 23:40:24 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: 0.0.0.0: Host key verification failed.
Jan 11 23:40:25 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: stopping yarn daemons
Jan 11 23:40:25 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: no resourcemanager to stop
Jan 11 23:40:25 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: spark-worker.service.consul: Host key verification failed.
Jan 11 23:40:25 ip-172-21-1-19.us-west-2.compute.internal hadoop[12095]: no proxyserver to stop
Jan 11 23:40:25 ip-172-21-1-19.us-west-2.compute.internal systemd[1]: Started Hadoop.


I have tried adding a config file to the spark user's ~/.ssh directory to disable host key verification:



Host *.service.consul
StrictHostKeyChecking=no
UserKnownHostsFile=/dev/null
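
For anyone reproducing this, the file can be laid down at instance boot roughly like the sketch below (the paths are illustrative; I'm assuming the spark user's home is /home/spark):

# sketch: create the ssh client config for the spark user at instance boot
install -d -m 700 -o spark -g spark /home/spark/.ssh
cat > /home/spark/.ssh/config <<'EOF'
Host *.service.consul
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
chown spark:spark /home/spark/.ssh/config
chmod 600 /home/spark/.ssh/config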


I know this config works, because if I run /opt/hadoop/sbin/start-all.sh manually as the spark user, the host keys are accepted immediately:



spark-master.service.consul: Warning: Permanently added 'spark-master.service.consul,172.21.3.106' (ECDSA) to the list of known hosts.
spark-master.service.consul: starting namenode, logging to /var/log/hadoop/hadoop-spark-namenode-ip-172-21-3-106.us-west-2.compute.internal.out
spark-worker.service.consul: Warning: Permanently added 'spark-worker.service.consul,172.21.1.19' (ECDSA) to the list of known hosts.
spark-worker.service.consul: datanode running as process 7173. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: Warning: Permanently added '0.0.0.0' (ECDSA) to the list of known hosts.
0.0.0.0: starting secondarynamenode, logging to /var/log/hadoop/hadoop-spark-secondarynamenode-ip-172-21-1-19.us-west-2.compute.internal.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop/logs/yarn-spark-resourcemanager-ip-172-21-1-19.us-west-2.compute.internal.out
spark-worker.service.consul: Warning: Permanently added 'spark-worker.service.consul,172.21.1.19' (ECDSA) to the list of known hosts.
spark-worker.service.consul: starting nodemanager, logging to /opt/hadoop/logs/yarn-spark-nodemanager-ip-172-21-1-19.us-west-2.compute.internal.out
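
(For completeness, the manual run above is just something like sudo -u spark -H /opt/hadoop/sbin/start-all.sh, or su - spark followed by the same script.)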


And I know for a fact that when I start the service, it runs as the spark user:



spark  13987      1  0 23:44 ?  00:00:00 bash /opt/hadoop/sbin/start-all.sh
spark  14000  13987  0 23:44 ?  00:00:00 bash /opt/hadoop/sbin/start-dfs.sh --config /opt/hadoop/etc/hadoop
spark  14074  14000  0 23:44 ?  00:00:00 bash /opt/hadoop/sbin/slaves.sh --config /opt/hadoop/etc/hadoop cd /opt/hadoop ; /opt/hadoop/sbin/hadoop-daemon.sh --config /opt/hadoop/etc/hadoop --script /opt/hadoop/sbin/hdfs start namenode
spark  14099  14074  0 23:44 ?  00:00:00 ssh spark-master.service.consul cd /opt/hadoop ; /opt/hadoop/sbin/hadoop-daemon.sh --config /opt/hadoop/etc/hadoop --script /opt/hadoop/sbin/hdfs start namenode
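
(That listing is from something like ps -ef | grep -i hadoop, trimmed to the relevant processes.)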


I need host keys to be accepted automatically on every instance deployed by my Auto Scaling group, rather than having to log in to each one by hand. Does anyone know how to do this? Is there some setting in my Hadoop service that I'm missing?
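
One workaround I'm considering, sketched below, is to stop relying on the per-user ~/.ssh/config and instead pass the options straight to the ssh calls that Hadoop's slaves.sh makes, via HADOOP_SSH_OPTS in /opt/hadoop/etc/hadoop/hadoop-env.sh:

# sketch (untested under systemd): make the host-key options explicit for slaves.sh
export HADOOP_SSH_OPTS="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"

But I'd still like to understand why the ~/.ssh/config approach works interactively and not under systemd.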



This is the service:



[root@ip-172-21-1-19 ~]# cat /usr/lib/systemd/system/hadoop.service
[Unit]
Description=Hadoop
After=syslog.target network.target remote-fs.target nss-lookup.target network-online.target spark-worker.service
Requires=network-online.target spark-worker.service

[Service]
User=spark
Group=spark
Type=forking
PermissionsStartOnly=true
ExecStartPre=/usr/bin/install -o spark -g spark -d /var/run/hadoop
ExecStart=/opt/hadoop/sbin/start-all.sh
ExecStop=/opt/hadoop/sbin/stop-all.sh
WorkingDirectory=/opt/hadoop
TimeoutStartSec=2min
Restart=on-failure
SyslogIdentifier=hadoop
StandardOutput=journal
StandardError=journal
LimitNOFILE=infinity
LimitMEMLOCK=infinity
LimitNPROC=infinity
LimitAS=infinity
SuccessExitStatus=143
RestartSec=20

[Install]
WantedBy=multi-user.target
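
The only other thing I've wondered about, purely as a sketch, is whether the unit needs the spark user's environment (e.g. HOME) spelled out for ssh to find ~/.ssh/config, although as far as I know ssh resolves the home directory from the passwd database rather than from $HOME:

# hypothetical addition to the [Service] section, untested
Environment=HOME=/home/spark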


Please let me know. Thanks.










      linux amazon-ec2 systemd hadoop consul






asked Jan 12 at 0:05 by Ethan Stein





















