Why Is There No Speed-up When Using Python's Multiprocessing For An Embarrassingly Parallel Problem Within A For-loop, With Shared Numpy Data?
Solution 1:
Every time you instantiate a Process, you start a brand-new OS process and pay its startup cost. If you do that inside your for loop, then yes, you are spawning new processes on every iteration. What you want instead is to create your Queue and your Processes once, outside the loop, and only feed work into the Queue inside the loop, as sketched below.
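Here is a minimal sketch of that structure. The worker function, the chunking scheme, and the sum-of-squares computation are hypothetical stand-ins, not the original poster's code; the point is only that the processes are started once and fed through the queue each iteration:

```python
import multiprocessing as mp

def worker(task_queue, result_queue):
    """Pull chunks of work until a None sentinel arrives, then exit."""
    for chunk in iter(task_queue.get, None):
        result_queue.put(sum(x * x for x in chunk))  # stand-in computation

if __name__ == "__main__":
    n_workers = 4
    task_queue = mp.Queue()
    result_queue = mp.Queue()

    # Start the workers ONCE, before the main loop.
    workers = [mp.Process(target=worker, args=(task_queue, result_queue))
               for _ in range(n_workers)]
    for p in workers:
        p.start()

    data = list(range(1000))
    for iteration in range(10):
        # Feed the (possibly updated) data in chunks; no new processes here.
        for i in range(n_workers):
            task_queue.put(data[i::n_workers])
        results = [result_queue.get() for _ in range(n_workers)]
        print(iteration, sum(results))
        data = [x + 1 for x in data]  # update the data for the next pass

    # Shut the workers down with one sentinel each.
    for _ in workers:
        task_queue.put(None)
    for p in workers:
        p.join()
```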
I've used multiprocessing.Pool before, and it's useful, but it doesn't offer much over what you've already implemented with a Queue.
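For comparison, here is roughly what the same loop looks like with multiprocessing.Pool. The pool's worker processes are likewise created once and reused across the map() calls inside the loop; square_sum is again a stand-in for the real computation:

```python
import multiprocessing as mp

def square_sum(chunk):
    return sum(x * x for x in chunk)  # stand-in computation

if __name__ == "__main__":
    n_workers = 4
    data = list(range(1000))
    # The Pool's workers are spawned once here and reused below.
    with mp.Pool(n_workers) as pool:
        for iteration in range(10):
            chunks = [data[i::n_workers] for i in range(n_workers)]
            results = pool.map(square_sum, chunks)
            print(iteration, sum(results))
            data = [x + 1 for x in data]  # update data for the next pass
```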
Solution 2:
Eventually, this all boils down to one question: is it possible to start the processes outside of the main for-loop and, on every iteration, feed the updated variables into them, have them process the data, and collect the newly computed results from all of the processes, without starting new processes on each iteration?
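The answer is yes. One way to do it, sketched below, is to keep the numpy data in a multiprocessing.Array so every long-lived worker sees in-place updates without copying, and to use small queues purely for synchronization. The in-place update (arr *= 1.01) and the per-slice sum are assumed stand-ins for the actual computation:

```python
import multiprocessing as mp
import numpy as np

def worker(shared_arr, start, stop, go_queue, done_queue):
    # Re-wrap the shared buffer as a numpy array (no copy is made).
    arr = np.frombuffer(shared_arr.get_obj())
    # Each True on go_queue means "the data was updated, run again";
    # a None sentinel means "shut down".
    for _ in iter(go_queue.get, None):
        done_queue.put(arr[start:stop].sum())  # stand-in computation

if __name__ == "__main__":
    n, n_workers = 1_000_000, 4
    shared_arr = mp.Array("d", n)            # shared float64 buffer
    arr = np.frombuffer(shared_arr.get_obj())
    arr[:] = np.random.rand(n)

    step = n // n_workers
    go_queues = [mp.Queue() for _ in range(n_workers)]
    done_queue = mp.Queue()
    # Start the workers ONCE; they live for the whole computation.
    workers = [mp.Process(target=worker,
                          args=(shared_arr, i * step, (i + 1) * step,
                                go_queues[i], done_queue))
               for i in range(n_workers)]
    for p in workers:
        p.start()

    for iteration in range(10):
        arr *= 1.01                          # update shared data in place
        for q in go_queues:
            q.put(True)                      # signal: data ready, go
        # Waiting for all results before the next update avoids races,
        # so the Array's built-in lock is not needed here.
        total = sum(done_queue.get() for _ in range(n_workers))
        print(iteration, total)

    for q in go_queues:
        q.put(None)                          # sentinel: shut down
    for p in workers:
        p.join()
```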