How are problems classified in Complexity Theory?

I'm reading Sipser's Introduction to the Theory of Computation (3rd edition). In chapter 0 (p. 2), he says we don't know the answer to "what makes some problems computationally hard and others easy"; however, he then states that "researchers have discovered an elegant scheme for classifying problems according to their computational difficulty. Using this scheme, we can demonstrate a method for giving evidence that certain problems are computationally hard, even if we are unable to prove that they are."



So my question is: HOW is it possible to classify problems according to their computational difficulty, if we don't even know what makes a problem computationally easy/hard in the first place?



Also, what/where is this "scheme" that does the classifying? (I did some Googling and couldn't find anything.)










complexity-theory

asked Mar 29 at 9:32 by Johan von Adden

3 Answers

That's what you get when you distill a whole lot of theory for a wider audience.

In his book, Sipser addresses a general audience at the undergraduate level, possibly with no notion of computability theory; hence, he can only hint at concepts which are given a more formal treatment later on in the book. The part you cite is from chapter 0 (i.e., not really a chapter), whereas the material on complexity theory only appears at the end (i.e., part three). This is why the passage is so fuzzy. Most likely it is intended only as motivation and as a broad overview of the topics covered in the book.

The "scheme" Sipser is talking about is reductions. If we know a problem $A$ is reducible to a problem $B$, then we know $B$ is at least as hard as $A$. (Incidentally, this is also why it is common practice to denote reductions with a "$\le$" sign.) This gives us a way of ordering problems according to their difficulty, at least for those with reductions we are aware of. As Sipser states, though, using only reductions "we are unable to prove" whether the problems are really hard or not; reductions give us only relative, not absolute, notions of hardness. This is why separation results are still rare in modern complexity theory: we have a bunch of reduction results (e.g., NP-completeness) but only a handful of separation results (e.g., the time and space hierarchy theorems).






answered Mar 29 at 9:56 by dkaeae, edited Mar 29 at 19:53

I appreciate the thorough answer. Vincenzo (one of the commenters) mentioned that Sipser discusses this in Ch 5 & 7, which I'll hopefully get to eventually! – Johan von Adden, Mar 29 at 10:12


HOW is it possible to classify problems according to their computational difficulty, if we don't even know what makes a problem computationally easy/hard in the first place?

I think the point the piece is trying to make is that we know how to determine whether individual problems are easy or hard, even though we don't have an overarching theory of why the hard ones are hard and the easy ones are easy. Just as you can classify people according to their weight, even though you don't know why they weigh what they do.

I should emphasise that, in most cases, "hard" means "seems to be hard". You've probably heard of NP-complete problems. We don't know for certain that these problems have no efficient algorithm (by the standard definition of "efficient"), but nobody has been able to find an efficient algorithm for any of them in nearly 50 years of trying, and finding an efficient algorithm for just one of them would give efficient algorithms for all of them.

Also, what/where is this "scheme"

Complexity classes, the relationships between them, and the concept of reductions for transforming one problem into another.
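For instance (this chain is my addition, not part of the original answer), the best-known classes nest as $\mathsf{P} \subseteq \mathsf{NP} \subseteq \mathsf{PSPACE} \subseteq \mathsf{EXP}$, and the time hierarchy theorem gives the unconditional separation $\mathsf{P} \subsetneq \mathsf{EXP}$, so at least one of those inclusions must be strict, even though we currently cannot say which.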






answered Mar 29 at 14:12 by David Richerby, edited Mar 29 at 16:27

            The "scheme" is based on the ideas of reductions among problems and completeness of problems, which are described in Chapters 5 and 7 of Sipser's book.






answered Mar 29 at 9:56 by Vincenzo