Hello!
I am using PuTTY on Windows to SSH into my school's high-powered compute machine. It runs Linux, so I downloaded the Linux version of Blender onto it, copied over my .blend file, and it actually rendered successfully! In short, without being on campus, I can have my computer give directions to the school machine and let it do all the work.
It must be multi-threaded, because "top" tells me I am using 1400% of the CPU. Wow. It's still taking a long time to run, but that's probably because of my thousands of fur particles.
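(If I ever need to rein that in, Blender's -t flag seems to control the render thread count; as far as I can tell, leaving it off or passing 0 means "use the system's processor count":

./blender -b file.blend -t 8 -f 1

I've left it alone so it grabs every core.)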
Is CPU rendering the best I can get? Or is there a way to make Blender render on the GPU from the command line?
I am using this on the Linux machine:
./blender -b file.blend -E CYCLES -o //render -x 1 -f 1
However, it renders on the CPU even though the .blend file's settings should force it to render on the GPU. Perhaps that's because the compute device in my Blender user preferences is set to my local GPU, and the preferences won't let me select anything else, so I can't point it at the remote machine's GPU.
Any ideas? I would rather use the Network Render addon for this, but I can't figure out how to make the school computer a slave without the user interface; with only the command line, I'm not sure what to do. I know the addon lets you specify GPU use on slave machines, but I couldn't get it working, so for now I am just running Blender on the school machine without it, which means copying the files over each time.
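For reference, the closest I've gotten with the addon is a slave script like the one below. It's untested and pieced together from what I could find of the addon's internals, so the engine identifier, the netrender property names, the slave-start trick, and the master's address/port are all my guesses:

import bpy

# Enable the Network Render addon (module name "netrender" in 2.6)
bpy.ops.wm.addon_enable(module="netrender")

# Switch to the network render engine, in slave mode
# (engine ID as I recall it; check the addon's RenderEngine class to be sure)
bpy.context.scene.render.engine = 'NET_RENDER'
bpy.context.scene.network_render.mode = 'RENDER_SLAVE'

# Point the slave at the master machine (placeholder address and port)
bpy.context.scene.network_render.server_address = "192.168.1.10"
bpy.context.scene.network_render.server_port = 8000

# Supposedly, starting an animation render while in slave mode
# kicks off the slave's job-processing loop
bpy.ops.render.render(animation=True)

I'd run it with ./blender -b -P slave.py, but I haven't gotten it to behave.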
Ask and you shall receive!
I am pretty sure rendering through the command line ignores the CUDA/OpenCL GPU setting from the user config, for a few reasons. First, Cycles is actually an addon, not built into the Blender 2.6 core, so I am fairly positive the GPU rendering feature is initialized when the UI starts and isn't required by the CLI render command. That's also why the only GPU-related CLI options you'll find are for OpenGL (i.e. GPU debugging).
Even so, there are ways to push the work directly to the GPU, though getting the same CUDA performance is a whole other thing. I am not sure what today's NVIDIA Linux drivers provide (assuming that is even what's being used on that system), but it's worth checking out.
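One way to experiment: Blender's -P flag runs a Python script before rendering, so you can try forcing the compute device from a script. This is only a sketch based on the 2.6x Python API as I remember it; the property names and the 'CUDA_0' device identifier are assumptions you should verify in a Python console first:

import bpy

# Set the compute device at the user-preferences level
# (the same thing the System panel in the UI edits)
bpy.context.user_preferences.system.compute_device_type = 'CUDA'
# Pick the first CUDA device on the render machine (identifier is a guess)
bpy.context.user_preferences.system.compute_device = 'CUDA_0'

# Tell Cycles to render this scene on the GPU instead of the CPU
bpy.context.scene.cycles.device = 'GPU'

Saved as, say, setgpu.py, your render command becomes something like:

./blender -b file.blend -E CYCLES -P setgpu.py -o //render -x 1 -f 1

Note that -P has to come before -f, because Blender processes its arguments in order.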
To get definitive answers, make sure that your BLENDER_USER_CONFIG environment variable is in fact pointing to the same directory as your Blender config, and turn on the debug option with your render command. You should at the very least see what Blender sees, and doesn't like, or doesn't support.
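For example (the config path here is a guess; adjust it to your Blender version):

export BLENDER_USER_CONFIG=~/.config/blender/2.66/config
./blender -b file.blend -d -E CYCLES -o //render -x 1 -f 1

The -d flag turns on Blender's debug output, which should make any device-detection complaints visible.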
Best of luck!
Thank you so much for the thorough reply! I'll try it out. If it doesn't work, I'll probably just stick to using GitHub to push my changes to the Linux machine and run it from there with whatever settings. The CPU rendering is awesome since this machine has so many cores, and it beats sitting around waiting for my home computer to render. I can delegate the render to the school and keep working on my own thing. At least my excessive student tech fees are getting me something. ;)