Does regularization penalize models that are simpler than needed?














Yes, regularization penalizes models that are more complex than needed. But does it also penalize models that are simpler than needed?










machine-learning predictive-models modeling regularization






asked Mar 31 at 11:44 by alienflow








  • usεr11852 (Mar 31 at 12:04): Given we use an appropriate testing procedure to select our regularisation parameter strength, it should not penalise any models unnecessarily. (+1)










1 Answer

For regularization terms similar in effect to $\left\|\theta\right\|_2^2$: no, they don't; they only push toward simplicity, i.e. parameters closer to zero.



Error terms such as $\sum_i \left\|y_i - f_{\theta}(x_i)\right\|_2^2$ are responsible for fighting back toward complexity (penalizing over-simplification), since the simplest model, i.e. $\theta = 0$, leads to a high error.



We balance these two forces by using a regularization parameter ($\lambda$) in a summation like
$$\frac{1}{N}\sum_{i=1}^{N} \left\|y_i - f_{\theta}(x_i)\right\|_2^2 + \lambda\left\|\theta\right\|_2^2,$$
where a higher $\lambda$ forces the model toward more simplicity.
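
A minimal numerical sketch of this trade-off (assuming NumPy and scikit-learn are available; the synthetic data and the $\lambda$ grid are illustrative choices, and scikit-learn's `alpha` plays the role of $\lambda$ up to the $1/N$ scaling of the error term):

```python
# Illustrative only: ridge regression, i.e. roughly the objective
#   (1/N) * sum_i ||y_i - f_theta(x_i)||^2 + lambda * ||theta||^2,
# fit for several values of lambda (scikit-learn calls it `alpha`).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_theta = np.array([3.0, -2.0, 0.5, 0.0, 0.0])   # assumed ground truth
y = X @ true_theta + rng.normal(scale=0.5, size=200)

for lam in [0.01, 1.0, 100.0, 10_000.0]:
    model = Ridge(alpha=lam).fit(X, y)
    train_mse = mean_squared_error(y, model.predict(X))
    # Larger lambda => smaller ||theta|| (more "simplicity") but larger
    # training error: only the error term pushes back against over-simplifying.
    print(f"lambda={lam:>8}: ||theta||_2={np.linalg.norm(model.coef_):6.3f}, "
          f"train MSE={train_mse:6.3f}")
```

With the smallest $\lambda$ the fit is essentially unregularized, while the largest one visibly under-fits: the penalty alone never stops shrinking the parameters; only the growing error term does.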






answered Mar 31 at 12:00 (edited Mar 31 at 12:10) by Esmailian












  • alienflow (Mar 31 at 12:05): So, regularizations like L2 and L1 correspond to the first case, right?

  • Esmailian (Mar 31 at 12:06): @alienflow yes, they all force toward zero (most simple).
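
As usεr11852's comment on the question notes, the strength $\lambda$ is normally chosen by a validation procedure rather than fixed by hand. A minimal sketch of that selection step (again assuming scikit-learn; `RidgeCV`, the candidate grid, and the synthetic data are illustrative, not from the thread):

```python
# Illustrative only: pick the regularization strength by cross-validation,
# so the chosen lambda neither over- nor under-penalizes on held-out data.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = X @ np.array([3.0, -2.0, 0.5, 0.0, 0.0]) + rng.normal(scale=0.5, size=200)

candidate_lambdas = np.logspace(-3, 3, 13)        # grid of lambda values to try
model = RidgeCV(alphas=candidate_lambdas, cv=5).fit(X, y)
print("lambda selected by 5-fold CV:", model.alpha_)
```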

















