InvalidSchema("No connection adapters were found for '%s'" % url)
I was able to gather data from a web page using this:

import requests
import lxml.html
import re

url = 'http://animesora.com/flying-witch-episode-7-english-subtitle/'
r = requests.
Solution 1:
You are passing in the whole list:
for link2 in down:
    r2 = requests.get(down)
Note how you passed in down, not link2. down is a list, not a single URL string. Pass in link2 instead:
for link2 in down:
    r2 = requests.get(link2)
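For clarity, here is a runnable sketch of the corrected loop, assuming down comes from the xpath query discussed next (the setup lines are reconstructed for illustration, not taken from the original post):

import requests
import lxml.html

r = requests.get('http://animesora.com/flying-witch-episode-7-english-subtitle/')
dom = lxml.html.fromstring(r.content)

# Collect every download link once, then request each one individually.
down = dom.xpath('//div[@class="downloadarea"]//a/@href')
for link2 in down:
    r2 = requests.get(link2)      # pass the single URL, never the whole list
    print(r2.status_code, link2)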
I'm not sure why you are using regular expressions. In the loop
for link in dom.xpath('//div[@class="downloadarea"]//a/@href'):
each link is already a fully qualified URL:
>>> for link in dom.xpath('//div[@class="downloadarea"]//a/@href'):
... print link
...
https://link.safelinkconverter.com/review.php?id=aHR0cDovLygqKC5fKC9FZEk2Qg==&c=1&user=51757
https://link.safelinkconverter.com/review.php?id=aHR0cDovLygqKC5fKC95Tmg2Qg==&c=1&user=51757
https://link.safelinkconverter.com/review.php?id=aHR0cDovLygqKC5fKC93dFBmVFg=&c=1&user=51757
https://link.safelinkconverter.com/review.php?id=aHR0cDovLygqKC5fKC9zTGZYZ0s=&c=1&user=51757
You don't need to do any further processing on that.
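Put differently, lxml already returns each @href value as a plain string, so the re module can be dropped entirely. A small sketch illustrating this (only the selector comes from the code above; the rest is assumed):

import requests
import lxml.html

r = requests.get('http://animesora.com/flying-witch-episode-7-english-subtitle/')
dom = lxml.html.fromstring(r.content)

links = dom.xpath('//div[@class="downloadarea"]//a/@href')
if links:
    # Each result is an lxml "smart string" (a str subclass), usable as a URL directly.
    print(isinstance(links[0], str))   # True
    print(links[0])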
Your remaining code has more errors; you confused r2.url with r2.content, and forgot the .xpath part in your dom2.xpath(...) query.
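Putting those fixes together, a sketch of what the second parsing step could look like; the inner selector and variable names are placeholders, since the rest of the original code is not reproduced here:

import requests
import lxml.html

page = requests.get('http://animesora.com/flying-witch-episode-7-english-subtitle/')
dom = lxml.html.fromstring(page.content)
down = dom.xpath('//div[@class="downloadarea"]//a/@href')

for link2 in down:
    r2 = requests.get(link2)
    # Parse the response body (r2.content), not the URL string (r2.url).
    dom2 = lxml.html.fromstring(r2.content)
    # The .xpath(...) call must be spelled out; this selector is only an example.
    for target in dom2.xpath('//a/@href'):
        print(target)

The key point is that each stage parses a response body with lxml.html.fromstring and then queries the parsed document with .xpath, never the raw URL.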