E. T. Jaynes' subjectivism vs measurement of distributions














In his paper, E. T. Jaynes argues that entropy is a measure of our ignorance about a system. As such, the probability distribution of states $\{p_k\}$ has to be chosen in the most unbiased way, by maximizing the entropy subject to the constraints given by all the available information. This is a subjectivist point of view, because it treats probabilities as descriptions of our ignorance rather than as an intrinsic property of the system. He also claims that the reason statistical mechanics works is that the distributions are sharply peaked: as long as the peak is at the correct position, its shape is not that relevant.
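For concreteness, here is a minimal sketch in Python of the constrained maximization Jaynes describes (the energy levels and the mean-energy value are made-up illustrations, not from his paper): among all distributions $\{p_k\}$ with a given mean energy, the entropy maximizer is the Boltzmann form $p_k \propto e^{-\beta E_k}$, with $\beta$ fixed by the constraint.

```python
import numpy as np

# Toy system: four energy levels (arbitrary units, chosen for illustration).
E = np.array([0.0, 1.0, 2.0, 3.0])

def boltzmann(beta):
    """Maxent distribution under a mean-energy constraint: p_k ∝ exp(-beta*E_k)."""
    w = np.exp(-beta * E)
    return w / w.sum()

def mean_energy(beta):
    return boltzmann(beta) @ E

# The constraint fixes the single free parameter beta; find it by bisection
# (mean energy is monotonically decreasing in beta).
target = 1.2          # assumed "known" mean energy
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_energy(mid) > target else (lo, mid)
beta = 0.5 * (lo + hi)

p = boltzmann(beta)
S = -np.sum(p * np.log(p))    # information entropy of the maxent distribution
print(f"beta = {beta:.4f}, p = {np.round(p, 4)}, S = {S:.4f}")
```

Any other distribution satisfying the same mean-energy constraint has strictly lower entropy; that is Jaynes' criterion for "most unbiased".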



With the development of computers and experiments, however, we are now able to simulate the distribution of states in a system, or to measure actual equilibrium fluctuations at high resolution (with optical tweezers, for example). Thus, going beyond macroscopic quantities, we can simulate or measure actual probability distributions of states. Measurements show that these are indeed the distributions that maximize entropy (at constant temperature, for instance, the Boltzmann distribution). How, then, would a subjectivist argue that the probabilities of states are due to our lack of information about the system? If I can measure those distributions, they look very objective to me.
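To make concrete what "measuring" such a distribution amounts to in a simulation, here is a minimal sketch (the levels, temperature, and run length are assumptions for illustration): Metropolis Monte Carlo sampling of the same toy system, with occupation frequencies compared against the Boltzmann weights.

```python
import numpy as np

rng = np.random.default_rng(42)
E = np.array([0.0, 1.0, 2.0, 3.0])    # toy energy levels, arbitrary units
beta = 1.0                            # inverse temperature (units with k_B = 1)

# Metropolis sampling: propose a uniformly random level, accept it with
# probability min(1, exp(-beta * dE)); record how often each level is visited.
state = 0
counts = np.zeros(len(E))
for _ in range(500_000):
    proposal = rng.integers(len(E))
    dE = E[proposal] - E[state]
    if dE <= 0.0 or rng.random() < np.exp(-beta * dE):
        state = proposal
    counts[state] += 1

freq = counts / counts.sum()
p_boltz = np.exp(-beta * E)
p_boltz /= p_boltz.sum()
print("observed frequencies:", np.round(freq, 4))
print("Boltzmann weights   :", np.round(p_boltz, 4))
```

For a long enough run the two rows agree to a few decimal places, which is precisely the kind of observation this question is about.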










Tags: statistical-mechanics, entropy, probability, information






asked 8 hours ago by Botond






















1 Answer








> Thus, going beyond macroscopic quantities, we can simulate or measure actual probability distributions of states.




This is a misunderstanding. One never measures a probability; the verb does not apply to the noun. In such simulations/calculations one may record some numbers, such as the number of times the system was found in some region of phase space (or the number of times the system assumed some definite microstate). Such numbers can be divided by the total number of observations or the total number of time points, but this only gives the frequency of occurrences in that simulation, an artefact that depends on the initial condition and may not repeat itself with a different initial condition. It can serve as an estimate of the probability, but it is not itself the probability, which is supposed to abstract away from details such as the initial condition. Jaynes provides a coherent way to think about probability, and a way to find probabilities in a number of cases of interest in statistical physics, using the maximum information entropy principle. Of course, one should test, where possible, the usefulness of the probabilities so determined, for example through computer simulations of concrete cases.
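A small sketch of the distinction (the three-state probabilities below are made up for illustration): each finite run produces a frequency record, not a probability, and short records differ from run to run; longer runs merely estimate the underlying probability more tightly.

```python
import numpy as np

p = np.array([0.6, 0.3, 0.1])    # assumed probabilities of a toy 3-state system

# Frequencies are run-dependent artefacts; compare two runs at several lengths.
for seed in (0, 1):
    rng = np.random.default_rng(seed)
    for n in (100, 10_000, 1_000_000):
        draws = rng.choice(len(p), size=n, p=p)
        freq = np.bincount(draws, minlength=len(p)) / n
        print(f"run {seed}, n = {n:>9}: frequencies = {np.round(freq, 4)}")
```

In Jaynes' reading, `p` is what maximum entropy assigns from the stated information; the frequency tables are evidence for or against the usefulness of that assignment.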






answered 6 hours ago by Ján Lalinský













• I agree that all you can measure are frequencies of occurrence. At equilibrium, however, these frequencies converge to well-defined values as you increase the number of measurements, independently of the initial conditions. I'm biased because I was trained in the objectivist spirit, but I'm still having a hard time seeing why the measured frequencies should coincide so precisely with the "working probabilities" "guessed" via the maximum entropy principle. – Botond, 5 hours ago






• The match between observed frequencies and maxent probabilities isn't obvious, and it isn't always the case. However, it is simply the most probable outcome, provided all available knowledge was taken into account when deriving the probabilities. It is like the law of large numbers: there is no guarantee that statistics over a large number of experiments will agree with the derived probabilities, but if the derivation is right, agreement is very probable. – Ján Lalinský, 4 hours ago












