Verifying methods for fetching web resources with Python

Posted on 2019-7-15 14:14:03
1. The simplest form

    import urllib.request

    response = urllib.request.urlopen('http://python.org/')
    html = response.read()



2. Using Request

    import urllib.request

    req = urllib.request.Request('http://python.org/')
    response = urllib.request.urlopen(req)
    the_page = response.read()


3. Sending data

    #!/usr/bin/env python3
    import urllib.parse
    import urllib.request

    url = 'http://localhost/login.php'
    values = {
        'act': 'login',
        'login[email]': 'yzhang@i9i8.com',
        'login[password]': '123456',
    }

    # urlencode() returns a str; the POST body must be bytes
    data = urllib.parse.urlencode(values).encode('utf-8')
    req = urllib.request.Request(url, data)
    req.add_header('Referer', 'http://www.python.org/')
    response = urllib.request.urlopen(req)
    the_page = response.read()

    print(the_page.decode('utf8'))
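One pitfall worth noting in the example above: in Python 3, `urllib.parse.urlencode()` returns a `str`, while `urlopen()` requires the POST body as `bytes`, so the encoded form data must still be `.encode()`d. A quick check that runs entirely offline (the field names are just the ones from the example):

```python
from urllib.parse import urlencode

# urlencode() builds an application/x-www-form-urlencoded string;
# brackets and '@' get percent-encoded
values = {'act': 'login', 'login[email]': 'yzhang@i9i8.com'}
data = urlencode(values)
print(data)        # act=login&login%5Bemail%5D=yzhang%40i9i8.com
print(type(data))  # <class 'str'> -- must be encoded before POSTing
body = data.encode('utf-8')
print(type(body))  # <class 'bytes'>
```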


4. Sending data and headers

    #!/usr/bin/env python3
    import urllib.parse
    import urllib.request

    url = 'http://localhost/login.php'
    user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
    values = {
        'act': 'login',
        'login[email]': 'yzhang@i9i8.com',
        'login[password]': '123456',
    }
    headers = {'User-Agent': user_agent}

    # urlencode() returns a str; the POST body must be bytes
    data = urllib.parse.urlencode(values).encode('utf-8')
    req = urllib.request.Request(url, data, headers)
    response = urllib.request.urlopen(req)
    the_page = response.read()

    print(the_page.decode('utf8'))
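A `Request` object can be inspected before it is ever sent, which is a handy way to confirm that the data and headers were attached as intended. This sketch runs offline (the URL and body are placeholders from the example):

```python
import urllib.request

url = 'http://localhost/login.php'
data = b'act=login'
headers = {'User-Agent': 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'}

req = urllib.request.Request(url, data, headers)
print(req.get_method())  # POST -- a request with a body defaults to POST
print(req.data)          # b'act=login'
# Request normalizes header names via str.capitalize()
print(req.get_header('User-agent'))
```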


5. HTTP errors

    #!/usr/bin/env python3
    import urllib.error
    import urllib.request

    req = urllib.request.Request('http://www.python.org/fish.html')
    try:
        urllib.request.urlopen(req)
    except urllib.error.HTTPError as e:
        print(e.code)
        print(e.read().decode('utf8'))


6. Exception handling, approach 1

    #!/usr/bin/env python3
    from urllib.request import Request, urlopen
    from urllib.error import URLError, HTTPError

    req = Request('http://twitter.com/')
    try:
        response = urlopen(req)
    except HTTPError as e:
        print("The server couldn't fulfill the request.")
        print('Error code: ', e.code)
    except URLError as e:
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    else:
        print('good!')
        print(response.read().decode('utf8'))
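`HTTPError` is a subclass of `URLError` and also behaves like a response object, which is why its `except` branch must come before the `URLError` one in the example above. Its attributes can be checked locally by constructing one by hand (normally `urlopen()` raises it for you):

```python
from urllib.error import HTTPError, URLError

# Build an HTTPError directly, just to inspect it offline
e = HTTPError('http://twitter.com/', 404, 'Not Found', hdrs=None, fp=None)
print(isinstance(e, URLError))  # True -- 'except URLError' alone would catch it too
print(e.code)                   # 404
print(e.reason)                 # Not Found
```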


7. Exception handling, approach 2

    #!/usr/bin/env python3
    from urllib.request import Request, urlopen
    from urllib.error import URLError

    req = Request('http://twitter.com/')
    try:
        response = urlopen(req)
    except URLError as e:
        if hasattr(e, 'reason'):
            print('We failed to reach a server.')
            print('Reason: ', e.reason)
        elif hasattr(e, 'code'):
            print("The server couldn't fulfill the request.")
            print('Error code: ', e.code)
    else:
        print('good!')
        print(response.read().decode('utf8'))


8. HTTP authentication

    #!/usr/bin/env python3
    import urllib.request

    # create a password manager
    password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()

    # Add the username and password.
    # If we knew the realm, we could use it instead of None.
    top_level_url = "https://cms.tetx.com/"
    password_mgr.add_password(None, top_level_url, 'yzhang', 'cccddd')

    handler = urllib.request.HTTPBasicAuthHandler(password_mgr)

    # create "opener" (OpenerDirector instance)
    opener = urllib.request.build_opener(handler)

    # use the opener to fetch a URL
    a_url = "https://cms.tetx.com/"
    x = opener.open(a_url)
    print(x.read())

    # Install the opener.
    # Now all calls to urllib.request.urlopen use our opener.
    urllib.request.install_opener(opener)

    a = urllib.request.urlopen(a_url).read().decode('utf8')
    print(a)
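The password-manager half of the example can be exercised without contacting the server: `add_password()` registers credentials for a URI, and `find_user_password()` is what the auth handler calls when a 401 arrives. The host and credentials below are just the placeholders from the example:

```python
import urllib.request

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
top_level_url = "https://cms.tetx.com/"
password_mgr.add_password(None, top_level_url, 'yzhang', 'cccddd')

# Because the credentials were registered with realm None (the default
# realm), the lookup succeeds for any realm on that URI
print(password_mgr.find_user_password('Some Realm', top_level_url))
# ('yzhang', 'cccddd')
```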



9. Using a proxy

    #!/usr/bin/env python3
    import urllib.request

    # Note: urllib only speaks HTTP/HTTPS proxies natively; a SOCKS5
    # proxy like this one needs third-party support (e.g. PySocks)
    proxy_support = urllib.request.ProxyHandler({'socks5': 'localhost:1080'})
    opener = urllib.request.build_opener(proxy_support)
    urllib.request.install_opener(opener)

    a = urllib.request.urlopen('http://g.cn').read().decode('utf8')
    print(a)
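`ProxyHandler` takes a mapping from URL scheme to proxy URL; the schemes urllib routes natively are `http` and `https`. The mapping and the resulting opener can be inspected offline (the proxy address below is illustrative only):

```python
import urllib.request

# Scheme -> proxy URL
proxy_support = urllib.request.ProxyHandler({
    'http': 'http://localhost:8080',
    'https': 'http://localhost:8080',
})
opener = urllib.request.build_opener(proxy_support)
print(proxy_support.proxies['http'])  # http://localhost:8080
print(type(opener).__name__)          # OpenerDirector
```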


10. Timeouts

    #!/usr/bin/env python3
    import socket
    import urllib.request

    # timeout in seconds
    timeout = 2
    socket.setdefaulttimeout(timeout)

    # this call to urllib.request.urlopen now uses the default timeout
    # we have set in the socket module
    req = urllib.request.Request('http://twitter.com/')
    a = urllib.request.urlopen(req).read()
    print(a)
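Besides the module-wide socket default shown above, `urlopen()` also accepts a per-call `timeout` argument, which avoids changing global state. The default can be set and read back locally:

```python
import socket
import urllib.request

socket.setdefaulttimeout(2)
print(socket.getdefaulttimeout())  # 2.0 -- applies to all new sockets

# Per-call alternative (overrides the global default for this request only):
# urllib.request.urlopen('http://twitter.com/', timeout=2)
```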
