How do I get hrefs from hrefs?
How do I get hrefs from hrefs using Python in class and method format?
I have tried:



root_url = 'https://www.iea.org'

class IEAData:
    def __init__(self):
        try:
            ...   # body omitted in the question
        except:
            ...

    def get_links(self, url):
        all_links = []
        page = requests.get(root_url)
        soup = BeautifulSoup(page.text, 'html.parser')
        for href in soup.find_all(class_='omrlist'):
            all_links.append(root_url + href.find('a').get('href'))
        return all_links
        # print(all_links)

iea_obj = IEAData()
yearLinks = iea_obj.get_links(root_url + '/oilmarketreport/reports/')

reportLinks = []

for url in yearLinks:
    links = iea_obj.get_links(yearLinks)
    print(links)


Expected: the links variable should contain all the month hrefs, but it doesn't. How should I do this?
python web-scraping beautifulsoup
edited Nov 23 '18 at 11:00 by Martin Evans

asked Nov 23 '18 at 9:54 by user7917919

  • What's the issue here? Are you getting errors? If so, which ones? What I can see right away is that you're calling iea_obj.get_links(yearLinks) in your last loop, where yearLinks is a list, but the function expects its argument to be a string. I think you meant to do links =iea_obj.get_links(url).

    – ForceBru
    Nov 23 '18 at 9:59











  • In class-and-method format of Python, I need to parse all the links present in the hrefs, i.e. if you follow a year's href you then get the month hrefs, but within the class-and-method structure.

    – user7917919
    Nov 23 '18 at 10:08
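
A minimal sketch of the loop fix ForceBru suggests in the comment above, assuming get_links() also fetches the url passed to it rather than root_url (the answers below make that change as well); the name monthLinks is used here only for illustration:

iea_obj = IEAData()
yearLinks = iea_obj.get_links(root_url + '/oilmarketreport/reports/')

monthLinks = []
for url in yearLinks:
    # pass the single year URL, not the whole yearLinks list
    monthLinks += iea_obj.get_links(url)
print(monthLinks)
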
2 Answers
There were a couple of issues with your code. Your get_links() function was not using the url that was passed to it. When looping over the returned links, you were passing yearLinks rather than the url.

The following should get you going:

from bs4 import BeautifulSoup
import requests

root_url = 'https://www.iea.org'

class IEAData:
    def get_links(self, url):
        all_links = []
        page = requests.get(url)
        soup = BeautifulSoup(page.text, 'html.parser')

        for li in soup.find_all(class_='omrlist'):
            all_links.append(root_url + li.find('a').get('href'))
        return all_links

iea_obj = IEAData()
yearLinks = iea_obj.get_links(root_url + '/oilmarketreport/reports/')

for url in yearLinks:
    links = iea_obj.get_links(url)
    print(url, links)

This would give you output starting:



          https://www.iea.org/oilmarketreport/reports/2018/ ['https://www.iea.org/oilmarketreport/reports/2018/0118/', 'https://www.iea.org/oilmarketreport/reports/2018/0218/', 'https://www.iea.org/oilmarketreport/reports/2018/0318/', 'https://www.iea.org/oilmarketreport/reports/2018/0418/', 'https://www.iea.org/oilmarketreport/reports/2018/0518/', 'https://www.iea.org/oilmarketreport/reports/2018/0618/', 'https://www.iea.org/oilmarketreport/reports/2018/0718/', 'https://www.iea.org/oilmarketreport/reports/2018/0818/', 'https://www.iea.org/oilmarketreport/reports/2018/1018/']
          https://www.iea.org/oilmarketreport/reports/2017/ ['https://www.iea.org/oilmarketreport/reports/2017/0117/', 'https://www.iea.org/oilmarketreport/reports/2017/0217/', 'https://www.iea.org/oilmarketreport/reports/2017/0317/', 'https://www.iea.org/oilmarketreport/reports/2017/0417/', 'https://www.iea.org/oilmarketreport/reports/2017/0517/', 'https://www.iea.org/oilmarketreport/reports/2017/0617/', 'https://www.iea.org/oilmarketreport/reports/2017/0717/', 'https://www.iea.org/oilmarketreport/reports/2017/0817/', 'https://www.iea.org/oilmarketreport/reports/2017/0917/', 'https://www.iea.org/oilmarketreport/reports/2017/1017/', 'https://www.iea.org/oilmarketreport/reports/2017/1117/', 'https://www.iea.org/oilmarketreport/reports/2017/1217/']
answered Nov 23 '18 at 10:53 by Martin Evans
I'm fairly new to programming, and I'm still learning and trying to understand how classes and whatnot all work together. But I gave it a shot (that's how we learn, right?).

Not sure if this is what you're looking for as your output. I changed two things and was able to put all the links from within the yearLinks into a list. Note that it will also include the PDF links as well as the month links that I think you wanted. If you want only the months and not the PDF links, just filter out the ones containing ".pdf" (see the commented-out line below).

So here's the code I did it with; maybe you can adapt it to how you have yours structured.

import bs4
import requests

root_url = 'https://www.iea.org'

class IEAData:

    def get_links(self, url):
        all_links = []
        page = requests.get(url)
        soup = bs4.BeautifulSoup(page.text, 'html.parser')
        for href in soup.find_all(class_='omrlist'):
            all_links.append(root_url + href.find('a').get('href'))
        return all_links
        # print(all_links)

iea_obj = IEAData()
yearLinks = iea_obj.get_links(root_url + '/oilmarketreport/reports/')

reportLinks = []

for url in yearLinks:
    links = iea_obj.get_links(url)

    # uncomment the line below if you do not want the .pdf links
    # links = [x for x in links if ".pdf" not in x]
    reportLinks += links
answered Nov 23 '18 at 11:05 by chitown88
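
As a quick check of the answer above (a sketch, assuming its code has just run; monthLinks is an illustrative name), you could inspect what was collected and strip the direct .pdf links:

print(len(reportLinks))    # total number of links collected across all years
print(reportLinks[:5])     # peek at the first few

# keep only the month pages, dropping direct .pdf links
monthLinks = [x for x in reportLinks if ".pdf" not in x]
print(len(monthLinks))
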