How to use multithreading in Python


I need to check whether some URLs are responding or not, and if a URL is not responding I need to display that. I don't want to wait for one check to finish before the next one starts; for that reason I want to use multithreading. How can I use multithreading to make this code more efficient?

    import threading
    import time
    import pymongo
    import smtplib
    import urllib2
    from urllib2 import urlopen, URLError
    from socket import socket

    res = {"ftp": 'ftp://ftp.funet.fi/pub/standards/rfc/rfc959.txt',
           "tcp": 'devio.us:22',
           "smtp": 'http://smtp.gmail.com',
           "http": "http://www.amazon.com"}

    def allurls():
        try:
            if 'http' in res:
                http_test(res["http"])
                get_threads(res["http"])
            if 'tcp' in res:
                tcp_test(res["tcp"])
            if 'ftp' in res:
                ftp_test(res["ftp"])
            if 'smtp' in res:
                smtp_test(res["smtp"])
        except pymongo.errors.ConnectionFailure as e:
            print "could not connect to mongodb: %s" % e

    def tcp_test(server_info):
        cpos = server_info.find(':')
        try:
            sock = socket()
            sock.connect((server_info[:cpos], int(server_info[cpos+1:])))
            sock.close()  # note the parentheses; sock.close alone does nothing
            print server_info + " \t\tresponding"
        except Exception as e:
            print str(e)

    def http_test(server_info):
        try:
            data = urlopen(server_info)
            print server_info + " \t\tresponding", data.code
            FetchUrl(server_info).start()
        except Exception as e:
            print str(e)

    def ftp_test(server_info):
        try:
            data = urlopen(server_info)
            print server_info + "  -  responding", data.code
        except Exception as e:
            print str(e)

    def smtp_test(server_info):
        try:
            conn = smtplib.SMTP("smtp.gmail.com", 587, timeout=10)
            try:
                status = conn.noop()[0]
            except:
                status = -1
            if status == 250:
                print server_info + " \t\t responding"
            else:
                print "not responding"
        except:
            print "something wrong in url"

    start = time.time()

    class FetchUrl(threading.Thread):
        def __init__(self, url):
            threading.Thread.__init__(self)
            self.daemon = True
            self.url = url

        def run(self):
            urlhandler = urllib2.urlopen(self.url)
            html = urlhandler.read()
            finished_fetch_url(self.url)

    def finished_fetch_url(url):
        print "\"%s\" \tfetched in %ss" % (url, (time.time() - start))

    def crawl(url):
        data = urllib2.urlopen(url).read()
        print url + " \t\treading"

    def get_threads(url):
        # TODO: 5 threads; need to pass the URLs here.
        # Pass the function and its args separately: target=crawl(url)
        # would run crawl in the main thread and pass None as the target.
        thread = threading.Thread(target=crawl, args=(url,))
        thread.start()
        thread.join()
        print "threads elapsed time: \t\t%s" % (time.time() - start)

Python was not designed for multithreading. In fact, there is a Global Interpreter Lock (GIL) baked into Python that makes true multithreading difficult with the vanilla libraries.
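That said, the GIL is released while a thread blocks on network I/O, so plain threads do overlap the waiting in a URL-checking script like yours. Here is a minimal sketch using the standard-library `concurrent.futures` module (Python 3 names; the host/port pairs are placeholders standing in for your `res` dictionary):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def tcp_check(host, port, timeout=5):
    # Return True if a TCP connection can be opened, False otherwise.
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:
        return False

def check_all(servers):
    # The checks run concurrently: each thread releases the GIL while
    # blocked in connect(), so the waits overlap instead of adding up.
    with ThreadPoolExecutor(max_workers=5) as pool:
        results = pool.map(lambda s: (s, tcp_check(*s)), servers)
    return dict(results)

servers = [("devio.us", 22), ("www.amazon.com", 80)]  # placeholders
# for (host, port), ok in check_all(servers).items():
#     print(host, "responding" if ok else "not responding")
```

With five workers, five slow servers cost roughly one timeout of wall-clock time instead of five.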

That is not impossible, though; you can use other libraries to work around the GIL. The easiest (and most widely applicable) one for your situation is gevent. I don't know your exact performance requirements and I don't have benchmarks at hand, so I'll just recommend the gevent approaches below and you can check them out on your own:

  • you can monkey patch script. monkey patching makes vanilla libraries work gevent. takes least effort.
  • you can rewrite script using gevent-based networking/http libraries.
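The first option can be sketched roughly as follows (assuming gevent is installed via `pip install gevent`; Python 3's `urllib.request` is shown where your code uses `urllib2`, and the URL list is a placeholder taken from your `res` dictionary):

```python
# monkey.patch_all() must run before anything else imports socket/ssl,
# so it goes at the very top of the script.
from gevent import monkey
monkey.patch_all()

import gevent
from urllib.request import urlopen  # urllib2.urlopen in Python 2

urls = ["http://www.amazon.com", "http://smtp.gmail.com"]  # placeholders

def fetch(url):
    try:
        print(url, "responding", urlopen(url, timeout=10).code)
    except Exception as e:
        print(url, "not responding:", e)

# One greenlet per URL; the patched blocking calls yield to each other,
# so all checks proceed concurrently without OS threads.
gevent.joinall([gevent.spawn(fetch, u) for u in urls], timeout=15)
```

After patching, your existing `urlopen`/`socket`/`smtplib` calls become cooperative automatically, which is why this route takes the least effort.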

Again, I have no data to tell you which would work better in your given situation.

