That sounds like a fantastic idea!
I am not an animator and I don't know sign language, but my first idea would be to create every hand position as a Shape Key, so you could transition between any two hand positions...
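Something like this rough Python sketch might be a starting point; I'm assuming a hand mesh object called "Hand" that already has one shape key per hand position (all the names here are just placeholders):

```python
# Untested sketch: blend between two hand-position shape keys over time.
# "Hand", "fist" and "point" are placeholder names for this example.
import bpy

hand = bpy.data.objects["Hand"]
keys = hand.data.shape_keys.key_blocks

def transition(from_key, to_key, start, end):
    """Keyframe a blend from one hand position to the next."""
    keys[from_key].value = 1.0
    keys[to_key].value = 0.0
    keys[from_key].keyframe_insert("value", frame=start)
    keys[to_key].keyframe_insert("value", frame=start)
    keys[from_key].value = 0.0
    keys[to_key].value = 1.0
    keys[from_key].keyframe_insert("value", frame=end)
    keys[to_key].keyframe_insert("value", frame=end)

transition("fist", "point", start=1, end=20)
```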
If you already know the hand positions, then you'd be 'finished' at this point (more or less; it wouldn't be a beautiful animation, but practically usable, I'd say).
If you want to be able to convert any written language to sign language, that would be a lot harder (almost like an automated lipsync...).
Don't even think about going from spoken audio to sign language; they can't even do automatic subtitling that way (at least not without getting about 30% of it wrong).
Hi Micke84. There are no subtitles on this Rig demo, but there are on the animation courses.
spikeyxxx is on the right track with his suggestion, but rather than using Shape Keys, which work on a mesh, you could use a Pose Library on the rig if your character is rigged. (I think that is what he meant, though.)
The only issue is that a Pose Library stores single poses (no motion), whereas with my limited knowledge of Auslan (Australian Sign Language), most of the words actually involve movement.
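For the static handshapes, a rough sketch of applying a stored pose via Python could look like this (it uses the classic Pose Library operator; "Rig" and the pose index are placeholders, and I haven't tested it):

```python
# Untested sketch: apply a pose from an armature's Pose Library.
import bpy

rig = bpy.data.objects["Rig"]  # placeholder armature name
bpy.context.view_layer.objects.active = rig
bpy.ops.object.mode_set(mode='POSE')

# apply_pose acts on the selected bones, so select them all first
bpy.ops.pose.select_all(action='SELECT')
bpy.ops.poselib.apply_pose(pose_index=0)  # first pose in the library
```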
So maybe one way to create your Blenlan (Blender Sign Language) would be to create a single animation Action for each of the words you need and then put it all together using the NLA and Python.
(Yes it's a lot of work haha)
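For the NLA part, a rough Python sketch might look like this; I'm assuming you already have one Action per sign (the armature and Action names below are just placeholders):

```python
# Untested sketch: lay one Action per word end-to-end on a single NLA track.
import bpy

def sequence_signs(armature_name, action_names, gap=5):
    """Place the given Actions one after another, with a short pause between words."""
    obj = bpy.data.objects[armature_name]
    obj.animation_data_create()
    track = obj.animation_data.nla_tracks.new()
    track.name = "Blenlan"
    frame = 1
    for name in action_names:
        action = bpy.data.actions[name]
        strip = track.strips.new(name, int(frame), action)
        frame = strip.frame_end + gap  # small pause between words

sequence_signs("Armature", ["sign.hello", "sign.world"])
```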
Thank you spikeyxxx & Wayne Dixon (waylow)
I want to try to turn "a lot of work" into "get more done in no time". Who wants "a lot of work" hahaha.
Motion capture with gloves and face tracking must be the fastest and most effective approach for sign language.
We've got the HTC Vive 2.0 Pro with Hi5 gloves. The data loads into Unity3D and works fine. We need to convert the C# code to Python so it can be read by Blender3D. The reason we want to transfer to Blender is to be able to make 3D fairy tales for children in sign language.
Are any of you good at this? How much would it cost to get help converting from C# to Python?
Our office is currently closed because of the Corona crisis. As soon as we are back after the Corona era, we will continue with this. I am happy to plan with you how we can make the 3D sign language as good as possible.
We'll handle face tracking later, after the gloves. I think we'll do it manually in Blender, but we have also looked at Kinect X with tracking points on the face. It can be a bit complicated to record both at the same time. But first we want to solve the C# to Python step so the Unity3D FBX animation data can be transferred to Blender3D.
Not exactly what you are looking for, but you might want to watch this tutorial by CGMatter about facial motion capture with Blender: https://www.youtube.com/watch?v=uNK8S19OSmA
Converting C# code to Python doesn't sound too complicated, but I can't do it myself (and maybe it's actually very hard...).
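One thing that might help in the meantime: if you can export the glove animation from Unity as an FBX file, Blender can import it directly, with no C# porting needed for that step. A minimal sketch (the file path is a placeholder):

```python
# Untested sketch: import a Unity-exported FBX animation into Blender.
import bpy

bpy.ops.import_scene.fbx(
    filepath="/path/to/glove_capture.fbx",  # placeholder path
    automatic_bone_orientation=True,  # can help with Unity-style bone axes
)
```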
There are some CGCookie members who can code; maybe start a new thread with a more specific title describing what you want, so it might attract the attention of a coder...
In the meantime stay safe!