How to set the upper bound on a scrapy spider ReturnsContract











I want to limit the number of items I extract from each page.



I found this documentation that seems to fit what I need:



class scrapy.contracts.default.ReturnsContract

This contract (@returns) sets lower and upper bounds for the items and
requests returned by the spider. The upper bound is optional:

@returns item(s)|request(s) [min [max]]
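
For example, a line like @returns items 0 10 asserts that the callback returns at least zero and at most ten items.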


But I don't understand how to use this class. In my spider, I tried to add



ReturnsContract.__setattr__("max",10)


But it didn't work. Am I missing something?










      python scrapy web-crawler






asked Nov 19 at 17:45 by Mrtnchps
























          1 Answer



























          The Spider Contracts are meant for testing purposes, not to control your data extraction logic.




          Testing spiders can get particularly annoying and while nothing
          prevents you from writing unit tests the task gets cumbersome quickly.
          Scrapy offers an integrated way of testing your spiders by the means
          of contracts.



          This allows you to test each callback of your spider by hardcoding a
          sample url and check various constraints for how the callback
          processes the response. Each contract is prefixed with an @ and
          included in the docstring.
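
          For illustration, a contract lives in the callback's docstring and is exercised with the scrapy check command. A minimal sketch (the spider name, URL, and selector are placeholders, not from the original post):

          import scrapy

          class SampleSpider(scrapy.Spider):
              name = "sample"  # hypothetical spider name

              def parse(self, response):
                  """Parse a sample page. The @-prefixed lines below are contracts,
                  evaluated when you run: scrapy check sample

                  @url http://www.example.com/
                  @returns items 0 10
                  """
                  for title in response.css("h1::text").extract():
                      yield {"title": title}

          Note that contracts only make scrapy check pass or fail; they never truncate what the spider yields during a real crawl, which is why they cannot do what the question asks.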




          For your purpose, you can simply set an upper bound in your extraction logic, for example:



          response.xpath('//my/xpath').extract()[:10]
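
          Put in context, a minimal sketch of that idea (the spider name, URL, and XPath are placeholders):

          import scrapy

          class PoliteSpider(scrapy.Spider):
              name = "polite"                           # hypothetical spider name
              start_urls = ["http://www.example.com/"]  # placeholder URL

              def parse(self, response):
                  # Slicing the extracted list caps how many items this callback
                  # yields per page; matches beyond the first 10 are ignored.
                  for value in response.xpath("//my/xpath/text()").extract()[:10]:
                      yield {"value": value}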






          answered Nov 19 at 18:59 by Guillaume (accepted)

          • I want to add an upper bound so my spider is "polite". I think this still scrapes the full website but returns only 10 results. It is still better than what I had found, so I will probably use it.
            – Mrtnchps
            Nov 19 at 19:08








          • If you want to limit the number of items or the number of pages to crawl, take a look at the close spider extension: doc.scrapy.org/en/latest/topics/…. You can configure your spider to stop after it has scraped X items or requested X pages.
            – Guillaume
            Nov 19 at 19:24
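
          A minimal sketch of that suggestion, using the CLOSESPIDER_ITEMCOUNT and CLOSESPIDER_PAGECOUNT settings read by Scrapy's CloseSpider extension (the spider name and URL are placeholders):

          import scrapy

          class LimitedSpider(scrapy.Spider):
              name = "limited"                          # hypothetical spider name
              start_urls = ["http://www.example.com/"]  # placeholder URL

              # The CloseSpider extension stops the crawl once either limit is hit.
              custom_settings = {
                  "CLOSESPIDER_ITEMCOUNT": 10,    # close after 10 items have been scraped
                  "CLOSESPIDER_PAGECOUNT": 100,   # or after 100 responses have been crawled
              }

              def parse(self, response):
                  for href in response.xpath("//a/@href").extract():
                      yield {"link": href}

          Unlike slicing inside the callback, these settings cap the whole crawl rather than the output of a single page.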










