More CPUs don't equal more speed
You really need to give more info on what you're doing in doit() to know what's going on. Are you using subprocess, threading, multiprocessing, etc?
Going off of what you've put there, those nested for loops are being run in the one main thread. If doit() kicks off a program and doesn't wait for it to finish, then you're just instantly starting 1,200 instances of the external program. If doit() _does_ wait for it to finish, then you're not doing anything different from 1,200 one-at-a-time calls with no parallelization.
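To make the two failure modes concrete, here's a hedged sketch (we don't know what doit() actually contains, so both function bodies are hypothetical; the child command is a trivial Python one-liner standing in for your real program):

    import subprocess
    import sys

    def doit_fire_and_forget(fname):
        # Popen returns immediately; the caller's loop races ahead and
        # launches every child at once, regardless of CPU_COUNT.
        return subprocess.Popen([sys.executable, "-c", "pass"])

    def doit_blocking(fname):
        # run() waits for the child to exit, so the calling loop is
        # strictly serial: one file at a time, CPU_COUNT never matters.
        return subprocess.run([sys.executable, "-c", "pass"])

Either way, the loop structure alone never gives you "CPU_COUNT at a time."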
How are you making sure you have CPU_COUNT versions running, only that many running, and kicking off the next one once any of those completes?
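The standard way to get that behavior is multiprocessing.Pool, which keeps exactly CPU_COUNT workers busy and hands each one the next file as soon as it finishes its current one. A minimal sketch, with a placeholder doit() since we don't know what yours does:

    import multiprocessing

    CPU_COUNT = 4  # or multiprocessing.cpu_count()

    def doit(fname):
        # placeholder for the real per-file work,
        # e.g. a blocking subprocess call
        return fname.upper()

    if __name__ == "__main__":
        filelist = ["a.mid", "b.mid", "c.mid"]  # hypothetical file names
        # Pool keeps exactly CPU_COUNT workers busy: as one finishes a
        # file it is immediately handed the next one from the iterable.
        with multiprocessing.Pool(CPU_COUNT) as pool:
            results = pool.map(doit, filelist)
        print(results)

concurrent.futures.ProcessPoolExecutor gives you the same scheduling with a slightly different API, if you prefer it.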
From: Python-list [mailto:python-list-bounces+david.raymond=tomtom.com at python.org] On Behalf Of Bob van der Poel
Sent: Thursday, May 23, 2019 2:40 PM
Subject: More CPUs don't equal more speed
I've got a short script that loops through a number of files and processes
them one at a time. I had a bit of time today and figured I'd rewrite the
script to process the files 4 at a time by using 4 different instances of
python. My basic loop is:
for i in range(0, len(filelist), CPU_COUNT):
    for z in range(i, i+CPU_COUNT):
        doit(filelist[z])
With the function doit() calling up the program to do the lifting. Setting
CPU_COUNT to 1 or 5 (I have 6 cores) makes no difference in total speed.
I'm processing about 1200 files and my total duration is around 2 minutes.
No matter how many cores I use the total is within a 5 second range.
This is not a big deal ... but I really thought that throwing more
processors at a problem was a wonderful thing :) I figure that the cost of
loading the python libraries and my source file and writing it out are
pretty much i/o bound, but that is just a guess.
Maybe I need to set my sights on bigger, slower programs to see a difference.
**** Listen to my FREE CD at http://www.mellowood.ca/music/cedars ****
Bob van der Poel ** Wynndel, British Columbia, CANADA **
EMAIL: bob at mellowood.ca