AWS Elastic Beanstalk periodically goes down





I noticed lately that my Laravel project on an AWS Elastic Beanstalk setup has been acting strangely. The server goes down periodically: on a t3.small about every 50 minutes, and on a t3.nano about every 5 minutes. The health tab reports that memory is exhausted. The environment turns "Severe" for about 5-10 minutes, then recovers without me doing anything; the monitoring graph is basically a zigzag.



Here are some things I've done that I suspect may be the cause:




  • I've re-enabled Pusher for broadcasting. The project had a Pusher setup before and it was working fine, but I had disabled it (removed all parts that use it) because I didn't need it yet. The problem occurred after I re-enabled it.

  • I've played with AWS WAF and CloudFront. I was studying those two services and experimented with some settings, but I can't remember pointing either of them at my Elastic Beanstalk application. I have since removed everything I added in WAF and CloudFront.


Here are some facts:




  • Whenever I remove the container commands that run schedule:run and queue:work, it becomes completely fine: totally "OK" status even if I simulate sending hundreds of requests per second. (A sketch of what such a setup typically looks like follows this list.)

  • I tried scaling to 3 instances and the result is still the same; the downtime just occurs less often.

  • It gives a 503 error code whenever it's down

  • The Elastic Beanstalk platform is PHP 7.2 running on 64bit Amazon Linux/2.8.4.

  • I'm testing the job and queue by sending one Pusher message every minute. It doesn't do anything except send the current time. This is also the only cron job running.

  • The cron job works and I can receive the Pusher messages, except during downtime.
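
For context, the question doesn't show the actual .ebextensions config, but a container-command setup like the one described (wiring up schedule:run via cron and launching queue:work) typically looks something like the sketch below. The file name, paths, and user are assumptions, not necessarily the asker's real config:

# .ebextensions/cron.config -- hypothetical sketch, not the config from
# the question. Paths match the Amazon Linux PHP platform defaults.
files:
  "/etc/cron.d/laravel-scheduler":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Run the Laravel scheduler every minute as the webapp user
      * * * * * webapp php /var/app/current/artisan schedule:run >> /dev/null 2>&1

container_commands:
  01_start_queue_worker:
    # Start a queue worker in the background on each deploy. NOTE: a worker
    # launched this way is unsupervised, so a crash or repeated deploys can
    # leave stale worker processes consuming memory.
    command: "nohup php /var/app/current/artisan queue:work > /dev/null 2>&1 &"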


Here's an observation from the logs: there's an "internal dummy connection" entry related to Apache, and it is logged at exactly the times the downtime occurs.



I've chased every hint in the logs, juggled different settings on the cron job, and checked other possible causes. I've also asked my peers, but no one has encountered this error before; in fact, they tested my cron job and it works properly for them.



I also have this output in /var/log/httpd/error_log:



[Fri Nov 23 19:07:35.208657 2018] [suexec:notice] [pid 3142] AH01232: suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
[Fri Nov 23 19:07:35.228633 2018] [http2:warn] [pid 3142] AH10034: The mpm module (prefork.c) is not supported by mod_http2. The mpm determines how things are processed in your server. HTTP/2 has more demands in this regard and the currently selected mpm will just not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol will be inactive.
[Fri Nov 23 19:07:35.228644 2018] [http2:warn] [pid 3142] AH02951: mod_ssl does not seem to be enabled
[Fri Nov 23 19:07:35.229188 2018] [lbmethod_heartbeat:notice] [pid 3142] AH02282: No slotmem from mod_heartmonitor
[Fri Nov 23 19:07:35.267841 2018] [mpm_prefork:notice] [pid 3142] AH00163: Apache/2.4.34 (Amazon) configured -- resuming normal operations
[Fri Nov 23 19:07:35.267860 2018] [core:notice] [pid 3142] AH00094: Command line: '/usr/sbin/httpd -D FOR










apache amazon-web-services laravel-5 amazon-ec2 amazon-elastic-beanstalk






asked Nov 23 '18 at 19:33









Jeremy Layson

























1 Answer




















This is a case of running into the CPU credit and throttling restrictions on t2/t3 EC2 instances. One CPU credit allows an instance to run one vCPU at 100% utilization for one minute. Credits are replenished at a constant rate per hour while the instance runs (the rate depends on the instance size), so prolonged load above the baseline gradually depletes them, leading to the states you describe.
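
For a rough sense of scale (accrual rates from the AWS documentation linked below): a t3.nano accrues 6 CPU credits per hour and a t3.small 24, while a fully busy instance spends one credit per vCPU per minute. That difference in accrual rates is at least consistent with the question's observation that the t3.nano degrades roughly ten times sooner than the t3.small.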



It's advisable to use a higher tier of instance (m3.medium and above) to sustain production workloads consistently, since fixed-performance instances don't throttle. Placing a load balancer in front of multiple instances is also a good way to maintain availability.



More information can be found here: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/burstable-credits-baseline-concepts.html
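
If you want to confirm this, the CPUCreditBalance CloudWatch metric shows the credit drain directly. A minimal check with the AWS CLI; the instance ID is a placeholder, and the date invocation assumes GNU date:

# Average CPUCreditBalance over the last 6 hours, sampled every 5 minutes.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EC2 \
  --metric-name CPUCreditBalance \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --start-time "$(date -u -d '6 hours ago' '+%Y-%m-%dT%H:%M:%SZ')" \
  --end-time "$(date -u '+%Y-%m-%dT%H:%M:%SZ')" \
  --period 300 \
  --statistics Average

If the balance bottoms out at the same times the environment turns Severe, credit exhaustion is confirmed.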






edited Dec 11 '18 at 16:45

























answered Nov 28 '18 at 19:08









Kunal Nagpal
































