A very costly thing in cloth simulation is this: you choose some settings, arrange the set-up, run the simulation, and find that about 60 frames later the mesh has folded in on itself in a way you don't want. You tweak some settings or the start position and run it again, only to find it folds in on itself again and again. Eventually you get it right, but a lot of time is wasted waiting for the simulation to reach the point you actually need to check. If the cache is rebuilt every time you make a tweak and you simulate at about 2 frames per second, you're waiting half a minute per attempt just to find out you need to re-tweak.
Are there any ways to make better use of the computer's resources for simulations so I don't have to wait so long, other than adjusting self-collision, quality steps, and the number of vertices? I noticed that the physics and cloth simulations do not seem to use the GPU at all. They appear to be purely CPU driven, and even then they don't load the CPU as heavily as they seemingly could.
For example, one simulation caps at roughly 20% CPU utilization while caching at a steady 2 fps. Another simulation with more vertices uses more CPU, but still runs at a steady 2 fps.
If I remember right, cloth simulation is only single threaded, which would account for why it's only using 20%. It's a known limitation that they'll work on eventually but I'm not sure when.
To speed things up, try placing a subsurf modifier above the cloth modifier in the stack. Turn it off to test with a lower poly mesh, and then when that looks good, turn it on and bake the simulation.
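If it helps, here's a rough bpy sketch of that stack order on the active object (the modifier names, subdivision level, and quality value are just placeholders, and the bake operator is commented out since its context requirements vary by Blender version). Leave the Subsurf's viewport visibility off while tweaking, then switch it on before baking:

    import bpy

    obj = bpy.context.object  # the cloth object

    # Subdivision Surface first in the stack, so the cloth sim sees the subdivided mesh
    subsurf = obj.modifiers.new(name="Subsurf", type='SUBSURF')
    subsurf.levels = 2
    subsurf.show_viewport = False  # off while tweaking: cloth runs on the low-poly mesh

    # Cloth after the Subsurf
    cloth = obj.modifiers.new(name="Cloth", type='CLOTH')
    cloth.settings.quality = 5  # quality steps, placeholder value

    # ...tweak cloth settings and scrub the timeline on the fast low-poly version...

    # Once the motion looks right, enable the Subsurf and bake the full simulation
    subsurf.show_viewport = True
    # bpy.ops.ptcache.bake_all(bake=True)  # context requirements vary by Blender version

The point is simply that the Cloth modifier simulates whatever mesh the modifiers above it hand down, so toggling the Subsurf switches between a fast low-poly test and the full-quality bake.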
Thanks for that tidbit about the threading. I had not thought about that being the restriction at all. Sure enough, looking around the net it seems other users have reported that physics simulations are highly serial in nature and don't lend themselves well to parallel computation.
I did some experiments with a simple high-polygon-count plane over a cube as a cloth simulation, running it with Blender restricted to a single core and then allowed to use multiple cores. I could clearly see the difference in system utilization: calculations spread across multiple cores in tandem when they were available, and everything was throttled onto one core when Blender was limited to a single core.
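For anyone who wants to try a similar comparison, a minimal version of that test scene can be built with a script along these lines (the poly count and frame range are arbitrary placeholders, not the exact values I used, and it assumes a recent bpy API):

    import bpy

    # Collision object: a cube at the origin
    bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
    cube = bpy.context.object
    cube.modifiers.new(name="Collision", type='COLLISION')

    # Cloth object: a heavily subdivided plane hovering above the cube
    bpy.ops.mesh.primitive_plane_add(size=4.0, location=(0.0, 0.0, 2.0))
    plane = bpy.context.object
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.subdivide(number_cuts=100)   # roughly 10k faces; raise to stress the solver
    bpy.ops.object.mode_set(mode='OBJECT')
    plane.modifiers.new(name="Cloth", type='CLOTH')

    # Keep the frame range short so each test run stays quick
    bpy.context.scene.frame_end = 60

Run it, play back or bake the simulation, and watch per-core usage in a system monitor.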
It seems like Blender will take advantage of multiple cores when it can (and is more efficient than on a single core), at least for cloth simulation, but in the end you can only split so much work in parallel before you're limited by the sheer number of calculations that have to be done serially.
I'll have to play around with the subsurf approach more in the future, then. I remember trying it a bit, and a subsurf applied before the cloth certainly looked more realistic than one applied afterwards.