Python - Multiprocessing Queue Full -


I'm using concurrent.futures to implement multiprocessing. I'm getting a queue.Full error, which is odd because I'm only assigning 10 jobs.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

a_list = [np.random.rand(2000, 2000) for _ in range(10)]
with ProcessPoolExecutor() as pool:
    pool.map(np.linalg.svd, a_list)

Error:

Exception in thread Thread-9:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 921, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/threading.py", line 869, in run
    self._target(*self._args, **self._kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/concurrent/futures/process.py", line 251, in _queue_management_worker
    shutdown_worker()
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/concurrent/futures/process.py", line 209, in shutdown_worker
    call_queue.put_nowait(None)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/queues.py", line 131, in put_nowait
    return self.put(obj, False)
  File "/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/multiprocessing/queues.py", line 82, in put
    raise Full
queue.Full

Short answer
I believe pipe size limits are the underlying cause. There isn't much you can do about it except break your data into smaller chunks and deal with them iteratively. That means you may need to find a new algorithm that can work on small portions of your 2000x2000 array at a time to find the singular value decomposition. The sketch below shows the general shape of that idea.
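As an illustration only (this is not a full solution: the per-block decompositions are not the SVD of the whole matrix, which is exactly why a different algorithm would be needed), here is a minimal sketch of sending small chunks through the pool instead of whole 2000x2000 arrays. The blocks helper is hypothetical, not something from the question:

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def blocks(a, size=200):
    # Yield size x size tiles of a square array (illustrative helper).
    n = a.shape[0]
    for i in range(0, n, size):
        for j in range(0, n, size):
            yield a[i:i + size, j:j + size]

if __name__ == "__main__":
    a = np.random.rand(2000, 2000)
    with ProcessPoolExecutor() as pool:
        # Each pickled block is ~320KB instead of ~30MB, so it fits
        # comfortably through the executor's internal queue and pipe.
        block_svds = list(pool.map(np.linalg.svd, blocks(a)))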

Details
Let's get one thing straight right away: you're dealing with a lot of information. Just because you're only working with ten items doesn't mean it's trivial. Each of those items is a 2000x2000 array full of 4,000,000 floats at 64 bits each, so you're looking at about 244 megabits (roughly 30MB) per array, plus the other data that tags along in numpy's ndarrays.
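If you want to double-check that arithmetic on your own machine, numpy will tell you directly (a small check, not part of the original question):

import numpy as np

a = np.random.rand(2000, 2000)   # one of the ten work items
print(a.dtype, a.nbytes)         # float64 32000000 -> about 30MB of raw data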

The ProcessPoolExecutor works by launching a separate thread to manage its worker processes. That management thread uses a multiprocessing.Queue, called _call_queue, to pass jobs to the workers. These multiprocessing.Queues are fancy wrappers around pipes, and the ndarrays you're trying to pass to the workers are too large for the pipes to handle properly.
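You can poke at that internal queue yourself if you're curious. The attribute names below (_call_queue and _maxsize) are private CPython implementation details that can change between versions, so treat this as an exploratory sketch rather than a supported API:

from concurrent.futures import ProcessPoolExecutor

if __name__ == "__main__":
    pool = ProcessPoolExecutor(max_workers=4)
    pool.submit(pow, 2, 10)              # make sure the pool is up and running
    print(type(pool._call_queue))        # a multiprocessing queue wrapping a pipe
    print(pool._call_queue._maxsize)     # deliberately small: max_workers plus a little headroom
    pool.shutdown()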

Reading up on Python issue 8426 shows that figuring out exactly how big the pipes can be is difficult, even when you can look up a nominal pipe size limit for your OS. There are too many variables to make it simple. Even the order in which things are pulled off of the queue can induce race conditions in the underlying pipe that trigger odd errors.

I suspect that one of your workers is getting an incomplete or corrupted object off of the _call_queue, because that queue's pipe is full of giant objects. That worker dies in an unclean way, the work queue manager detects the failure, gives up on the work, and tells the remaining workers to exit. But it passes them poison pills over the _call_queue, which is still full of giant ndarrays. This is why you got the queue full exception - the data filled up the queue, and then the management thread tried to use the same queue to pass control messages to the other workers.
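That last step is easy to reproduce in isolation: put_nowait() on a multiprocessing.Queue that is already at capacity raises queue.Full immediately, which is the bottom frame of the traceback above. A minimal demonstration (the maxsize of 1 is just for illustration):

import multiprocessing
import queue

q = multiprocessing.Queue(maxsize=1)
q.put_nowait("stand-in for a giant ndarray")   # fills the bounded queue

try:
    q.put_nowait(None)                         # the manager's poison pill
except queue.Full:
    print("queue.Full raised, just like in the traceback")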

I think this is a classic example of the potential dangers of mixing data and control flows between different entities in a program. The large data not only blocked more data from being received by the workers, it also blocked the manager's control communications with the workers, because they use the same path.

I haven't been able to recreate the failure, so I can't be sure any of this is correct. But the fact that you can make this code work with 200x200 arrays (~2.5 megabits, or roughly 320KB each) seems to support the theory. Nominal pipe size limits seem to be measured in KB, or a few MB at most, depending on the OS and architecture. The fact that that amount of data can get through the pipes isn't surprising, especially when you consider that not all of the 320KB needs to fit in the pipe at once if a consumer is continuously receiving the data. It does suggest a reasonable upper bound on the amount of data you can push serially through a pipe.
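If you're curious about the actual pipe buffer size on your own machine, you can fill one end of a pipe with non-blocking writes until the OS refuses more. This is a rough probe (it assumes Python 3.5+ for os.set_blocking and a POSIX-style pipe), not a definitive measure of what multiprocessing can move:

import os

r, w = os.pipe()
os.set_blocking(w, False)             # make writes fail instead of blocking

total = 0
chunk = b"x" * 4096
try:
    while True:
        total += os.write(w, chunk)   # keep stuffing the pipe
except BlockingIOError:
    pass                              # the kernel buffer is full

print("pipe accepted about", total, "bytes before filling up")  # typically tens of KB
os.close(r)
os.close(w)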

