Category: Motion Capture


I have used my model from a previous module to attach the data I recorded from Ben in the mocap suit. But beforehand I needed a bound rig applied to my model, and I needed it to match the skeleton exported by the Gypsy software as closely as possible. It is a very basic skeleton: head, shoulders, arm, forearm and so on, with no digits like fingers and toes, but most importantly the joints are labelled correctly for what they are. Once that was bound to my character I exported the scene (the bound model and skeleton) as an FBX to later import into MotionBuilder and attach to my rig.
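For my own notes, the binding and export step looks roughly like this in Maya's Python. This is only a sketch: it assumes the model lives in Maya, that the FBX plug-in can be loaded, and the mesh and joint names are made up.

```python
# Rough sketch of binding a mesh to a mocap-style skeleton and exporting FBX.
# Mesh/joint names and the file path are placeholders for illustration only.
import maya.cmds as cmds

mesh = "alexis_body"      # hypothetical character mesh
root_joint = "Hips"       # root of the simple mocap skeleton

# Smooth-bind the mesh to the skeleton so it deforms with the joints.
cmds.skinCluster(root_joint, mesh, toSelectedBones=True)

# Export the bound model and skeleton together as an FBX for MotionBuilder.
cmds.loadPlugin("fbxmaya", quiet=True)
cmds.select(mesh, root_joint)
cmds.file("C:/mocap/alexis_bound.fbx", force=True, exportSelected=True,
          type="FBX export", preserveReferences=True)
```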

I have been trying to follow an online tutorial and I am finding it all a bit overwhelming. I can import my mocap performance BVH file fine, and it dances around fine. But the tutorial said I need to tell the computer what parts are what, where the elbow is for example. I couldn't find this list of data; the tutor's was in a drop-down called Optics, but I have no such drop-down. I tried looking in other files but no such luck so far. For now I just skipped that part and had a look at what was needed next: a puppet. I tried to get my head around it, but I am really struggling. I scaled the puppet fine, but you need to tell it which limb is influenced by which joint. I tried it and it was a horrible distorted mess. I need to tell the program which joints are which before I can attach the puppet.
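From what I have read since, the step I was missing is the characterization: telling MotionBuilder which of my skeleton's joints fill which slots (Hips, LeftArm and so on). A very rough sketch of how that can look in MotionBuilder's Python (pyfbsdk); the joint names and the mapping below are my assumptions, not something I have tested against my Gypsy skeleton.

```python
# Rough characterization sketch using MotionBuilder's Python (pyfbsdk).
# The joint names and slot mapping are assumptions for illustration.
from pyfbsdk import FBCharacter, FBFindModelByLabelName

character = FBCharacter("AlexisCharacter")

# Map skeleton joints onto the character slots MotionBuilder expects.
slot_to_joint = {
    "HipsLink": "Hips",
    "SpineLink": "Spine",
    "HeadLink": "Head",
    "LeftArmLink": "LeftArm",
    "LeftForeArmLink": "LeftForeArm",
    "RightArmLink": "RightArm",
    "RightForeArmLink": "RightForeArm",
    "LeftUpLegLink": "LeftUpLeg",
    "LeftLegLink": "LeftLeg",
    "RightUpLegLink": "RightUpLeg",
    "RightLegLink": "RightLeg",
}

for slot, joint_name in slot_to_joint.items():
    joint = FBFindModelByLabelName(joint_name)
    if joint:
        character.PropertyList.Find(slot).append(joint)

# Turn characterization on so the skeleton can drive (or be driven by) a rig.
character.SetCharacterizeOn(True)
```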

Mat had a look at it with me, but no cigar. I think it is time to call it a day on mocap for this module; I need to research it a lot more before I think about attaching the data to a character.


Mocap Data Recorded

Below are some screen recordings of the data collected from my session with the motion capture suit. You'll have to forgive the quality; I had to sit with a camera in front of the screen, as Windows didn't have any software to screen record and I needed a way to demonstrate what I had been doing.

I opened the BVH files in MotionBuilder and they look great; there are lots of subtle movements recorded, but there are plenty of areas that need a clean-up. For this module I think I need to stop there with the motion capture for now, but I have been reading up on a few things about it. Like, what is clean-up?

In the book ‘MoCap for Artists’ by Midori Kitagawa and Brian Windsor, they describe ‘cleaning data’ as removing bad data, replacing data with better data, or creating data where there is none. But they describe this process as time consuming and advise trying again, using clean-up only as a last resort. Looking at the data I collected, mainly the boxing recording (seen at 1.40), when the skeleton punches, the punches are very low, whereas Ben was punching quite high. In places like that I could apply data clean-up and bring the arms up.
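As a very simple illustration of that kind of clean-up, here is a sketch that nudges one rotation channel in a BVH file by a fixed offset. It assumes you already know which column of the motion data holds, say, the right shoulder's rotation; the column index used here is made up and would vary per file.

```python
# Naive clean-up sketch: add a fixed offset to one rotation channel in a BVH.
# The column index is an assumption; real BVH channel order varies per file.
def offset_bvh_channel(in_path, out_path, column, offset_degrees):
    with open(in_path) as f:
        lines = f.readlines()

    # Motion values start after "MOTION", the frame count and the frame time.
    motion_start = lines.index("MOTION\n") + 3
    for i in range(motion_start, len(lines)):
        values = lines[i].split()
        if len(values) > column:
            values[column] = str(float(values[column]) + offset_degrees)
            lines[i] = " ".join(values) + "\n"

    with open(out_path, "w") as f:
        f.writelines(lines)

# e.g. raise the punches by rotating a shoulder channel up a little
offset_bvh_channel("boxing.bvh", "boxing_cleaned.bvh", column=30, offset_degrees=15.0)
```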

I think this came about in the first place because of the type of motion capture suit. The skeleton would slip and move with the movement of the actor, and the gyros would be out of sync with the ‘north’ data, giving an inaccurate recording of the actor's movement.

Jumping

I asked Ben to jump for me and nothing really seemed to happen; it turns out you have to switch on the ‘Jump’ ability in the motion capture software. But this isn't a great jump. Seen at 0.40, Ben jumps, and the skeleton jumps sideways and stays in the air longer than Ben did. This could be because the gyros get thrown off by the sudden force of movement and cannot detect when the actor has hit the floor again, as we have not told the software where the floor is. If this were a camera-driven mocap, you could triangulate and place markers telling the system where the floor is in the scene.

I think this is why there were problems with the sitting and crawling test seen at 2.45. The software doesn't know where the floor is and seems to try to keep the skeleton upright. The skeleton's hands don't touch the floor.
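Since the system has no floor reference, one crude after-the-fact fix I could try is re-grounding the clip: shifting the root's height so its lowest point sits on a chosen floor level. A rough sketch, following the same BVH-editing pattern as the clean-up sketch above, and again assuming the root's Y translation is the second value on each motion line:

```python
# Rough re-grounding sketch for BVH data: shift the root's height so its
# lowest point sits on a chosen floor level. Column index and floor level
# are assumptions about this particular data.
def reground_bvh(in_path, out_path, y_column=1, floor_y=0.0):
    with open(in_path) as f:
        lines = f.readlines()

    motion_start = lines.index("MOTION\n") + 3  # skip header, frame count, frame time

    # Find the lowest root height in the clip.
    heights = []
    for i in range(motion_start, len(lines)):
        values = lines[i].split()
        if len(values) > y_column:
            heights.append(float(values[y_column]))
    shift = floor_y - min(heights)              # how far to move the whole clip

    # Apply the same vertical shift to every frame.
    for i in range(motion_start, len(lines)):
        values = lines[i].split()
        if len(values) > y_column:
            values[y_column] = str(float(values[y_column]) + shift)
            lines[i] = " ".join(values) + "\n"

    with open(out_path, "w") as f:
        f.writelines(lines)
```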

So there seem to be some limitations to this particular mocap system; it gets easily thrown off by having no understanding of where the floor is in 3D space. It works really well with upright motions, and you get some lovely smooth and subtle detail that you probably couldn't get with keyframe animation.

I asked Ben to do a couple of things: walk, box, jump and throw. I asked Ben to box because the idea was that I was initially applying this data to my character Alexis the boxer, and I thought it would be nice to have her perform some characteristic moves. I also asked for 10 seconds of walking and throwing. The idea of the throwing was to later composite in a ball or something like that for a bit of fun.

 

Playing with Mocap

Had an exciting day Friday! Got to play with the motion capture suit!!!! Ben kindly said he would wear it for me, so thanks to Ben Mayfield for taking the time out to do it; it also wasn't the most comfortable of equipment. I did my best to write down notes and take everything in, so I'm writing up what I have and what I remember. Please bear in mind this is my first ever go with motion capture.

The suit is a Gypsy 6 exoskeleton from Animazoo.

The software, Gypsy Sphinx, comes with the suit.

The data is transmitted wirelessly from the suit to a radio receiver. It uses gyroscopes to work out the angle of movement.

Before you start you need the calibration cube (also known as a jig).

This calibration cube is a known quantity to the program: it knows how tall, deep and wide the cube is.

Then we got Ben to stand in it and took a photo from the front and from his right.

From this the software can tell how tall Ben is and calculate other measurements. When we imported these pictures the computer wasn't having a great time calculating Ben's data, but a quick levelling and crop in Photoshop sorted that. He needs to have his arms by his sides as a default stance. Then in the software you upload the actor file (the photos) and tell the software which is the back-left corner, top-right corner, bottom-right corner and so on, using the balls on the calibration cube. Unfortunately I didn't have a screenshot, so I quickly drew this over the photo of Ben to demonstrate what it looked like. You then put the corresponding joints in place.
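The maths behind that step is essentially a proportion: because the cube's real size is known, the photo's scale in centimetres per pixel can be worked out and applied to the actor. A toy worked example, with all the numbers made up:

```python
# Toy example of estimating actor height from a calibration cube of known size.
# All numbers here are invented for illustration.
cube_height_cm = 100.0        # the cube is a known quantity to the software
cube_height_px = 400.0        # cube height measured in the photo, in pixels
actor_height_px = 720.0       # actor height measured in the same photo

cm_per_pixel = cube_height_cm / cube_height_px    # 0.25 cm per pixel
actor_height_cm = actor_height_px * cm_per_pixel  # 180.0 cm

print(actor_height_cm)
```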

The right side of the actor is the green side of the skeleton. Always the right side of the actor! Once all the joints are in place you store the actor's file. This means Ben has been calibrated! His height and the distance and position of the joints have been calculated and stored.

In my notes I have written down that the resistance values work off the numerical data recorded from the rotation of the joints. These resistance sensors are on the joints of the suit, positioned over the actor's own joints. By measuring the degree of movement in these sensors and in the gyroscope, the software can work out where the actor is in 3D space and pick up the slightest of movements.
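As I understand it, each sensor's reading maps to a joint angle, and chaining those angles down the skeleton (forward kinematics) gives the position of each limb. A very simplified 2D sketch of that idea; the calibration constants and limb lengths are invented:

```python
# Simplified 2D forward-kinematics sketch: sensor readings -> joint angles
# -> limb positions. All calibration constants are invented for illustration.
import math

def reading_to_angle(reading, reading_min=0.0, reading_max=1023.0,
                     angle_min=-90.0, angle_max=90.0):
    """Map a raw resistance reading linearly onto a joint angle in degrees."""
    t = (reading - reading_min) / (reading_max - reading_min)
    return angle_min + t * (angle_max - angle_min)

def arm_positions(shoulder_reading, elbow_reading,
                  upper_arm_len=30.0, forearm_len=25.0):
    """Chain the shoulder and elbow angles to find elbow and wrist positions."""
    shoulder_angle = math.radians(reading_to_angle(shoulder_reading))
    elbow_angle = shoulder_angle + math.radians(reading_to_angle(elbow_reading))

    elbow = (upper_arm_len * math.cos(shoulder_angle),
             upper_arm_len * math.sin(shoulder_angle))
    wrist = (elbow[0] + forearm_len * math.cos(elbow_angle),
             elbow[1] + forearm_len * math.sin(elbow_angle))
    return elbow, wrist

print(arm_positions(shoulder_reading=700, elbow_reading=300))
```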

I can’t remember which part of the program you open, but you load the calibrated actor data into this live window, and from there you need to set north. This tells the computer the default position of the actor and brings it back to the centre of the field. Then hit record and the actor can perform what is needed. Once finished, stop the recording and save as a BVH file, ready to open in MotionBuilder.

  • The process of recording live motion events, translating them into usable mathematical terms by tracking a number of key points in space over time and combining them to obtain a single 3D representation of the captured performance.
  • The captured subject is anything that exists in the real world and has motion.
  • These points should be pivot points or connections between rigid parts of the subject, e.g. the joints of the actor.
  • These points are known as potentiometers, sensors or markers.
  • Map the resulting data onto a 3D character (sketched below).
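To make that last point concrete in my head, this is roughly what a few frames of captured key-point data might look like before they are mapped onto a character. It is purely illustrative; all names and values are invented.

```python
# Purely illustrative sketch: for each frame, each tracked key point (a joint)
# has a 3D position; a mapping then pairs each point with a character joint.
frames = [
    {"Hips": (0.0, 95.0, 0.0), "LeftElbow": (-30.0, 120.0, 5.0)},   # frame 0
    {"Hips": (0.0, 96.1, 1.2), "LeftElbow": (-28.5, 123.4, 6.0)},   # frame 1
]

point_to_character_joint = {"Hips": "char_hips", "LeftElbow": "char_elbow_L"}

for frame_index, frame in enumerate(frames):
    for point, position in frame.items():
        print(frame_index, point_to_character_joint[point], position)
```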

Different ways of capturing motion.

  • Cameras that digitise different views of the performance.
  • Electromagnetic fields or ultrasound to track a group of sensors.
  • Potentiometers to determine the rotation of each link.
  • Or technology combining all of these to achieve real-time tracking of an unlimited number of key points.

 

Image Metrics Faceware

As part of the motion capture side of the brief I wanted to take a little look at performance capture. The first software that came to mind was Image Metrics' software Faceware. I saw their talk at the 2011 Bradford Animation Festival and now have a nice excuse to fold it into my studies.


I have had so many problems actually getting to use the trial offered on their website, and some problems with access to Windows computers as the plugin is Windows only. But I did eventually get to use it, with help from my tutors and some very lovely people at Image Metrics.

I have been emailing Jay from Faceware Technologies and he has been incredibly patient and helpful, and I would like to say thanks! He has informed me that they are working on an education programme, so I am excited to see what we will have access to.

Everything you need to test out their software is available!

The Character rigs can be found here – http://www.facewaretech.com/support/retargeter-support/faceware-facial-rig-library/

The processed performances – http://www.image-metrics.com/performance_library.php

and the plug-in – http://www.image-metrics.com/Faceware-Software/Download

So you are all set to have a go. I used this tutorial, which I had watched previously, provided by Image Metrics.

From this short tutorial I was able to grasp an understanding of the plugin very quickly. It is very easy and user friendly, and the effects I got in just an hour were amazing. Suddenly this polygon head was alive and talking!

Here are my results!

I had about 2 hours of play time with the Ilana rig. I covered the animation in the tutorial and continued to expand on some other facial movements. Two hours is nowhere near enough to get a perfect performance, but you can see how quickly you can start to get a really nice result!

So I’m not entirely sure how it works, but it's very clever.

The performance capture is markerless!

You then send off the footage to be analysed by the Faceware software.

And you will get sent back IMPD. This is the data that will link the facial movements of your actor onto your character rig. There is a lot more going on in between that I know I wouldn't understand, but the part we as human beings need to do is tell the plugin at what point the eyes are looking up, match that frame on the character rig and apply! In seconds, hundreds of keyframes will be added. Now every time the actor looks up, the character looks up; it has gone all the way through the sequence and keyed the eyes in position. Then on to the next pose, eyes looking down for example: find a frame where the actor is looking down, adjust your character rig, apply, and so on. You go through all the major poses like this and you end up with a performance like the one I did.
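To get the idea straight in my own head, here is a plain-Python illustration of that pose-matching workflow. This is not Faceware's actual API; the names, data and structure are all invented, and the real plugin does far more than this.

```python
# Plain-Python illustration of the pose-matching idea described above.
# NOT Faceware's API; all names and values are invented.
# I set the rig once per pose, and those rig values get keyed at every frame
# where the analysed performance hits that pose.
performance_poses = {            # frame -> pose label from the analysed footage
    10: "eyes_up", 42: "eyes_up", 95: "eyes_up",
    60: "eyes_down", 130: "eyes_down",
}

rig_values_for_pose = {          # pose label -> rig controls set by hand once
    "eyes_up":   {"eye_L_ty": 0.8,  "eye_R_ty": 0.8},
    "eyes_down": {"eye_L_ty": -0.6, "eye_R_ty": -0.6},
}

keyframes = {}                   # frame -> rig control values to key
for frame, pose in performance_poses.items():
    if pose in rig_values_for_pose:
        keyframes[frame] = rig_values_for_pose[pose]

for frame in sorted(keyframes):
    print(frame, keyframes[frame])
```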

I am hoping that I can get a performance of my own recorded and analysed so I can apply it to my own character!

So, 5 stars to the people at Faceware Technologies.