Creating thinned models during the dropout process

Applying dropout to a neural network amounts to sampling a “thinned” network from it. The thinned network consists of all the units that survived dropout. A neural net with n units can be seen as a collection of 2^n possible thinned neural networks.




Source: Dropout: A Simple Way to Prevent Neural Networks from Overfitting, pg. 1931.



How are we getting these 2^n models?

machine-learning deep-learning dropout

asked Mar 18 at 12:04 by ashirwad; edited Mar 18 at 13:41 by Djib2011

2 Answers

4

The statement is a bit of an oversimplification, but the idea is that, assuming we have $n$ nodes and each of these nodes might be "dropped", we have $2^n$ possible thinned neural networks. Obviously, dropping out an entire layer would alter the whole structure of the network, but the idea is straightforward: we ignore the activations/information from certain randomly selected neurons and thus encourage redundancy learning and discourage over-fitting on very specific features.
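
A minimal NumPy sketch of this sampling view (the layer size, drop probability, and names below are illustrative assumptions, not taken from the paper): each unit independently keeps or drops its activation, so every subset of units corresponds to one possible thinned network.

    import numpy as np

    rng = np.random.default_rng(0)

    def thinned_forward(activations, p_drop=0.5):
        # Each unit survives independently with probability 1 - p_drop;
        # the sampled 0/1 mask defines one "thinned" sub-network.
        mask = (rng.random(activations.shape) > p_drop).astype(activations.dtype)
        return activations * mask, mask

    h = rng.normal(size=5)                # outputs of a 5-unit layer
    thinned_h, mask = thinned_forward(h)  # e.g. mask = [1., 0., 1., 1., 0.]

    # Every unit is independently on or off, so this layer alone admits
    # 2**5 = 32 distinct masks, i.e. 32 possible thinned sub-networks.
    print(mask, 2 ** len(h))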



The same idea has also been employed in Gradient Boosting Machines, where instead of "ignoring neurons" we "ignore trees" at random (see Rashmi & Gilad-Bachrach (2015), DART: Dropouts meet Multiple Additive Regression Trees, on that matter).



Minor edit: I just saw Djib2011's answer (+1). He/she specifically shows why the statement is somewhat of an over-simplification. If we assume that we can drop any (or all, or none) of the neurons, we have $2^n$ possible networks.

answered Mar 18 at 13:55, edited Mar 18 at 14:00 – usεr11852

0

I too haven't understood their reasoning; I always assumed it was a typo or something...

The way I see it, if we have $n$ hidden units in a neural network with a single hidden layer and we apply dropout keeping $r$ of those, we'll have:

$$
\frac{n!}{r! \cdot (n-r)!}
$$



possible combinations (not $2^n$ as the authors state).

Example:

Assume a simple fully connected neural network with a single hidden layer with 4 neurons. This means the hidden layer will have 4 outputs $h_1, h_2, h_3, h_4$.

Now, you want to apply dropout to this layer with a 0.5 probability (i.e. half of the outputs will be dropped).

Since 2 out of the 4 outputs will be dropped, at each training iteration we'll have one of the following possibilities:

1. $h_1, h_2$
2. $h_1, h_3$
3. $h_1, h_4$
4. $h_2, h_3$
5. $h_2, h_4$
6. $h_3, h_4$

or by applying the formula:

$$
\frac{4!}{2! \cdot (4-2)!} = \frac{24}{2 \cdot 2} = 6
$$
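
A quick sketch of this count (purely illustrative; the unit names are hypothetical): enumerating the surviving pairs reproduces the 6 combinations listed above, while the independent-drop view of the paper gives $2^4 = 16$ configurations.

    from itertools import combinations
    from math import comb

    units = ["h1", "h2", "h3", "h4"]

    # If exactly 2 of the 4 outputs are dropped, the possible survivors are:
    for kept in combinations(units, 2):
        print(kept)       # ('h1', 'h2'), ('h1', 'h3'), ... 6 pairs in total

    print(comb(4, 2))     # 6, i.e. 4! / (2! * (4 - 2)!)
    print(2 ** 4)         # 16, if each unit can be kept or dropped independently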

answered Mar 18 at 13:51 – Djib2011

• I do not think that most implementations of dropout work by saying: if there are 100 neurons and the probability is 0.05, I have to disable exactly 5 neurons chosen at random. Instead, each neuron is disabled with a probability of 0.05, independently of what happens with the rest. Hence the cases where all or no neurons are disabled, while unlikely, are possible. – Daniel López, Mar 18 at 14:00

• @DanielLópez: I think both you and Djib2011 (+1 both) are "factually correct" on this. The statement is clearly oversimplifying things. You also need to take into account that most of the networks this paper is concerned with have thousands of neurons, so it is kind of accepted that no layer will be totally switched off. – usεr11852, Mar 18 at 14:03

• Agree, but I believe the above example is transmitting the idea that exactly $n \cdot \text{prob}$ units are disabled with dropout, where $\text{prob}$ is the dropout probability. And this is not how dropout works. – Daniel López, Mar 18 at 14:09

• Well... LLN is our friend. :) – usεr11852, Mar 18 at 14:14

• The flaw with the reasoning presented here is that dropout sets weights to 0 with some fixed probability, independently. This implies that the number of zero weights at each step has a binomial distribution, because dropout has the three defining characteristics of a binomial distribution: (1) dichotomous outcomes (weights are on or off), (2) a fixed number of trials (the number of weights in the model doesn't change), (3) the probability of success is fixed and independent for each trial. – Sycorax, Mar 19 at 2:36
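
A small simulation of the point made in the comments above, assuming independent Bernoulli dropout with the figures quoted there (100 neurons, drop probability 0.05); the exact printed values depend on the seed.

    import numpy as np

    rng = np.random.default_rng(0)
    n, p_drop, passes = 100, 0.05, 10_000

    # Each of the n units is dropped independently with probability p_drop,
    # so the number dropped per forward pass is Binomial(n, p_drop),
    # not a fixed n * p_drop.
    dropped = rng.binomial(n, p_drop, size=passes)

    print(dropped.mean())           # close to n * p_drop = 5 on average
    print((dropped == 5).mean())    # only a fraction of passes drop exactly 5 units
    print((dropped == 0).mean())    # occasionally no unit is dropped at all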