Category: OUDF 203 Negotiated studies

Mocap Data Recorded

Below are some screen recordings of the data collected from my session with the motion capture suit. You’ll have to forgive the quality; I had to sit with a camera in front of the screen, as Windows didn’t have any software to screen record and I needed a way to demonstrate what I had been doing.

I opened the BVH files in MotionBuilder and they look great; there are lots of subtle movements recorded, but there are plenty of areas that need a clean-up. For this module I think I need to stop there with the motion capture for now, but I have been reading up on a few things about it. Like: what is clean-up?

In the book ‘MoCap for Artists’, Midori Kitagawa and Brian Windsor describe ‘cleaning data’ as removing bad data, replacing data with better data, or creating data where there is none. But they describe this process as time consuming and advise trying the capture again, using clean-up only as a last resort. Looking at the data I collected, mainly the boxing recording (seen at 1.40): when the skeleton punches, the punches are very low, whereas Ben was punching quite high. In places like that I could apply data clean-up and bring the arms up.
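
To get my head around what that actually means, I sketched a tiny Python example of the ‘replacing data with better data’ idea: spot a frame that jumps wildly in one rotation channel and fill it back in from its neighbours. This is just my own illustration, not from the book, and the numbers are invented.

    import numpy as np

    def clean_channel(values, spike_threshold=30.0):
        """Replace frames that jump more than spike_threshold degrees
        both into and out of a frame with interpolated values."""
        values = np.asarray(values, dtype=float)
        diffs = np.abs(np.diff(values))
        bad = np.zeros(len(values), dtype=bool)
        # A frame is 'bad data' if the jumps in and out are both big.
        bad[1:-1] = (diffs[:-1] > spike_threshold) & (diffs[1:] > spike_threshold)
        good = np.where(~bad)[0]
        # 'Creating data where there is none': interpolate across the gap.
        values[bad] = np.interp(np.where(bad)[0], good, values[good])
        return values

    # An arm rotation channel with one obviously broken frame:
    print(clean_channel([10, 12, 14, 90, 18, 20]))  # the 90 becomes 16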

I think this came about in the first place because of the type of motion capture suit. The exoskeleton would slip and move with the movement of the actor, and the gyros would be out of sync with the ‘north’ data, giving an inaccurate recording of the actor’s movement.


I asked Ben to jump for me and nothing really seemed to happen; it turns out that you have to switch on the ‘Jump’ ability in the motion capture software. But this isn’t a great jump, seen at 0.40: Ben jumps, the skeleton jumps sideways and stays in the air longer than Ben did. This could be because the gyros get thrown off by the sudden force of movement and cannot detect when the actor has hit the floor again, as we have not told the software where the floor is. If this was a camera-driven mocap, you could triangulate and place markers telling the system where the floor is in the scene.

I think this is why there were problems with the sitting and crawling test, seen at 2.45. The software doesn’t know where the floor is and seems to try to keep the skeleton upright. The skeleton’s hands don’t touch the floor.

So there seem to be some limitations to this particular mocap system; it gets easily thrown off by not knowing where the floor is in 3D space. It works really well with upright motions, and you get some lovely smooth and subtle detail that you probably couldn’t get with keyframe animating.
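
Thinking about how I might patch the floor problem in clean-up, one idea (my own sketch, assuming I have already pulled per-frame heights for the root and a foot out of the data) is to decide where the floor is myself and lift any frame where a foot pokes through it:

    import numpy as np

    def pin_to_floor(root_y, foot_y, floor=0.0):
        """Lift the whole skeleton on any frame where the foot dips
        below the chosen floor height (all values in cm)."""
        penetration = np.minimum(foot_y - floor, 0.0)  # negative below the floor
        return root_y - penetration                    # raise the root by that depth

    root = np.array([100.0, 100.0, 98.0, 100.0])
    foot = np.array([0.0, -3.0, -5.0, 0.0])  # the crawl pushes the foot under
    print(pin_to_floor(root, foot))          # -> [100. 103. 103. 100.]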

I asked Ben to do a couple of things: walk, box, jump and throw. I asked Ben to box because the idea was that I was initially applying this data to my character Alexis the boxer, and I thought it would be nice to have her perform some characteristic moves. I also asked for 10 seconds of walking and throwing. The idea of the throwing was to later composite in a ball or something like that for a bit of fun.



Third time lucky

So my independent track went well… the third time round. I was having trouble getting a good camera solve from MatchMover. I was getting very frustrated, as both my tutor Matt and my friend Andy had a go at the same footage and got a really good solve. I tried it three times and on the third time I got it, but I am not sure what I was doing differently. I have a feeling it could have been the bounding boxes around the tracks; I just kept re-sizing them and eventually got a good, mostly green solve. After inputting the co-ordinates etc. I opened it in Maya and dropped a cube in the scene.

With this playblast you can see it is a really nice track. The cube looks firmly set on the table. I am very happy with that and pretty confident that I will be able to do it again.

After trying to mask out the tracking markers and failing (explained in a previous post), I decided to simply place some cubes over each one to cover them up. A bit sneaky, I know, but it worked nicely.

I did two render passes in Maya: one of the shadows only and one of the blocks only, with a 2-pixel blur. Annabeth suggested blurring it in Maya as it would be a 3D blur, rather than adding a 2D one in After Effects.

Rendering out the shadows separately allowed me to soften them in an After Effects layer to match the lighting in the footage, which I think has worked quite nicely. They look very much like they are on the table. Really pleased, and I will be using this as 5 seconds of my 10-second motion tracking demonstration.
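
For anyone curious what the layer stack is actually doing, here is a rough sketch of the maths in Python (not my actual After Effects comp; the arrays and values are made up): the shadow pass multiplies into the plate to darken it, and the blocks pass goes over the top using its alpha.

    import numpy as np

    def composite(plate, shadow, blocks, alpha):
        """plate/shadow/blocks: float RGB images in 0..1; alpha: 0..1 matte."""
        shadowed = plate * shadow                 # white = no shadow, darker = shadow
        a = alpha[..., None]
        return blocks * a + shadowed * (1.0 - a)  # standard 'over' operation

    plate = np.full((4, 4, 3), 0.8)
    shadow = np.ones((4, 4, 3)); shadow[2, 2] = 0.5    # one shadowed pixel
    blocks = np.zeros((4, 4, 3)); alpha = np.zeros((4, 4))
    blocks[1, 1] = [0.2, 0.2, 0.9]; alpha[1, 1] = 1.0  # one block pixel
    out = composite(plate, shadow, blocks, alpha)
    print(out[1, 1], out[2, 2])  # block colour; darkened table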

So the more I learn about digital film, games and animation, the more I spot and analyse stuff on the telly and in ads. I spotted this Late Rooms ad,

And immediately thought: I know how they did that! Well, I know how I could do that with what I have learnt about tracking. That could easily have been done using the point tracker in After Effects.

I found a good tutorial from Video Copilot on corner pin tracking and point tracking, and the point track part of the tutorial is pretty much how the advert did it.
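
Out of curiosity I sketched what a point track boils down to in code, using OpenCV’s Lucas-Kanade tracker as a stand-in for the After Effects one (the file name and starting point are made up): follow one feature from frame to frame, then pin your graphic to the path.

    import cv2
    import numpy as np

    cap = cv2.VideoCapture("footage.mp4")
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    point = np.array([[[320.0, 240.0]]], dtype=np.float32)  # feature to follow

    path = [tuple(point[0, 0])]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        point, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, point, None)
        if status[0, 0] == 0:
            break  # lost the feature, like the AE tracker drifting off
        path.append(tuple(point[0, 0]))
        prev_gray = gray

    # 'path' holds the per-frame position the graphic would be parented to.
    print(len(path), path[:3])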

I feel totally out of my depth at the minute. This motion capture stuff is so complicated, and I’m having such a difficult time installing the Faceware trial. I knew that the motion capture was going to be complicated, but I only wish I had tackled this first. All I want to do is get 10 seconds of motion capture data onto my model, but anyone in the industry would say ‘Only 10 seconds?!’. Time, time, time. I know it takes time, but time is running out and something has got to give. I’m feeling the practical Faceware test is going to have to be saved for another time; I just can’t get it working, and the worst part of it is that I wanted to use it for my end-of-year presentation! I have their beautiful facial rig, so maybe I can do something with that for my presentation.

It’s my own fault again and I won’t learn. I just want to do and learn so much that I think I am overwhelming myself.

So the Faceware test has gone, crossed off my list for another day. I just need to focus on my large motion track and my 10-second demonstration of motion capture data on my model. The only other thing in my brief was looking into Xbox Kinect hacking. I never intended to actually hack and use the Xbox Kinect, just to do some research into it. Plus there is the crazy number of blog posts I still want to do… best get cracking.

So I spent a couple of hours making a simple but interesting shape to put into my large track test, and this is what I came up with.

It still needs a little work; I would like a nice shiny metal texture so I can get a nice environment map to reflect off it.

This is the first time I have used mental ray render settings, and so far it looks good. But I will be calling on Andy to help me get a better effect, as he is pretty good with mental ray settings.

I think I will just sit the shape in the scene, so I need to level out one of the sides of the cube so that it sits flat.

Large track prep

So Andy and I are doing this one together: achieving a track in a large open space. For my part, I want to put a simple metallic shape in the large space and take an environment map for realistic reflections.

So the plan is to do it this Tuesday, the 15th. So what’s the weather going to be like?

We are planning on filming in the morning, possibly in a few locations; first stop, Millennium Square, Leeds. We might be lucky and miss the rain, but if not, it just means another challenge, and After Effects has a rain effect.

I’m going to give Leeds council a quick ring, just to ask permission/let them know what we are doing in case it attracts any attention. I’ll be taking my surveyor’s tape measure to take down the height from the floor to the camera and from the camera to the position I want to put my shape in. We also need the focal length and film back.
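
As a side note, the reason the focal length and film back matter is that between them they fix the camera’s angle of view, which the solver needs. A quick sanity check I can do in Python (example numbers, not our camera):

    import math

    def horizontal_fov(focal_length_mm, film_back_mm):
        """Angle of view from focal length and film back (sensor width)."""
        return math.degrees(2 * math.atan(film_back_mm / (2 * focal_length_mm)))

    print(horizontal_fov(35.0, 36.0))  # ~54.4 degrees for a 35mm lens, 36mm back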

Fingers crossed things go smoothly.


Had an exciting day Friday! I got to play with the motion capture suit! Ben kindly said he would wear it for me, so thanks to Ben Mayfield for taking the time out to do it, especially as it wasn’t the most comfortable of equipment. I did my best to write down notes and take everything in, so I’m writing up what I have and what I remember. Please bear in mind this is my first ever go with motion capture.

The suit is a Gypsy 6 exoskeleton from Animazoo.

The software, Gypsy Sphinx, comes with the suit.

The data is transmitted wirelessly from the suit to a radio receiver. It uses gyroscopes to work out the angle of movement.
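
This also explains the gyros going out of sync with ‘north’ that I mentioned earlier: a gyroscope only reports angular velocity, so the orientation is the running sum of its readings, and any tiny bias builds up over time. A little sketch of my own (made-up numbers) shows how fast it adds up:

    import numpy as np

    dt = 0.01                           # assume a 100 Hz sample rate
    true_rate = np.zeros(1000)          # the actor is actually standing still
    bias = 0.05                         # tiny constant sensor error, deg/sec
    measured = true_rate + bias

    heading = np.cumsum(measured * dt)  # integrate velocity to get the angle
    print(heading[-1])                  # ~0.5 degrees of drift after 10 seconds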

Before the start you need the calibration cube (also known as a jig).

This calibration cube is a known quantity to the program: it knows how tall, deep and wide the cube is.

Then we get Ben to stand in it and take a photo from the front and one from his right.

From this the software can tell how tall Ben is and calculate his other measurements. When we imported these pictures the computer wasn’t having a great time calculating Ben’s data, but a quick levelling and crop in Photoshop sorted that. He needs to have his arms by his sides as a default stance. Then in the software you upload the actor file (the photos) and tell it which is the back left corner, top right corner, bottom right and so on, using the balls on the calibration cube. Unfortunately I didn’t have a screenshot, so I quickly drew this on Ben to demonstrate what it looked like. You then put the corresponding joints in place.

The right side of the actor is the green side of the skeleton. Always the right side of the actor! Once all the joints are in place you store the actor’s file. This means Ben has been calibrated: his height and the distance and position of his joints have been calculated and stored.
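
The idea behind the cube clicked for me when I thought of it as similar triangles: because the cube’s real size is known, the photo’s pixels-per-centimetre is known too, and Ben’s height falls out of it. A toy version in Python (the pixel numbers are invented):

    def actor_height_cm(cube_height_cm, cube_height_px, actor_height_px):
        """Scale the actor's pixel height by the cube's known size."""
        cm_per_px = cube_height_cm / cube_height_px
        return actor_height_px * cm_per_px

    # A 200 cm cube spanning 500 px puts a 450 px actor at 180 cm.
    print(actor_height_cm(200.0, 500.0, 450.0))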

In my notes I have written down that the resistance valves work off the numerical data recorded from the rotational value of the joints. These resistance valves sit on the joints of the suit, positioned on the actor. By measuring the degree of movement within these resistance valves and the gyroscope, the software can work out where the actor is in 3D space and pick up the slightest of movements.
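
To make sense of how joint rotations turn into a position in space, I sketched the 2D version of the idea (forward kinematics) in Python. This is just my own illustration, not how the Gypsy software actually computes it: each joint’s angle accumulates down the limb, and the end of the chain lands where the hand would be.

    import math

    def limb_positions(lengths, angles_deg):
        """lengths: bone lengths from the shoulder out; angles_deg: each
        joint's rotation relative to its parent. Returns joint (x, y)s."""
        x = y = heading = 0.0
        points = [(x, y)]
        for length, angle in zip(lengths, angles_deg):
            heading += math.radians(angle)  # rotations accumulate down the chain
            x += length * math.cos(heading)
            y += length * math.sin(heading)
            points.append((x, y))
        return points

    # Upper arm 30 cm, forearm 25 cm; shoulder at 45 deg, elbow bent 30 more.
    print(limb_positions([30.0, 25.0], [45.0, 30.0]))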

I can’t remember which part of the program you open, but you load the calibrated actor data into this live window, and from there you need to set north. This tells the computer the default position of the actor and brings it back to the centre of the field. Then hit record and the actor can perform what is needed. Once finished, stop the recording and save as a BVH file ready to open in MotionBuilder.
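
The nice thing about BVH is that it is plain text: a HIERARCHY section describing the skeleton, then a MOTION section with one line of channel values per frame. A small sketch of my own for peeking inside a take (the file name is made up):

    def bvh_summary(path):
        with open(path) as f:
            lines = f.read().splitlines()
        start = lines.index("MOTION")
        frames = int(lines[start + 1].split()[-1])        # "Frames: 250"
        frame_time = float(lines[start + 2].split()[-1])  # "Frame Time: 0.00833"
        return frames, frames * frame_time

    frames, seconds = bvh_summary("take.bvh")
    print(frames, "frames,", round(seconds, 1), "seconds of capture")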

  • The process of recording live motion events and translating them into usable mathematical terms by tracking a number of key points in space over time, combining them to obtain a single 3D representation of the captured performance.
  • The captured subject is anything that exists in the real world and has motion.
  • These points should be pivot points or connections between rigid parts of the subject, e.g. the joints of the actor.
  • These points are known as potentiometers, sensors or markers.
  • Map the resulting data onto a 3D character.

Different ways of capturing motion:

  • Cameras that digitise different views of the performance.
  • Electromagnetic fields or ultrasound to track a group of sensors.
  • Potentiometers to determine the rotation of each link.
  • Or technology with a combination of all of these to achieve real-time tracking of an unlimited number of key points.


I have sort of taken things for granted up until now. Two of the programs I wanted to use are not actually available on Mac OS. I need MotionBuilder to investigate the motion capture data and also the Image Metrics software Faceware, but both of those programs are Windows-only! I should have investigated beforehand, but I kind of just assumed they were on Mac. It’s not the end of the world, though: there is a Windows laptop at college and Windows machines in the mezz, so I should be able to look into the programs enough to fulfil the brief.

But this makes me think about what tools I will need to hand after college; having both a Mac and a Windows operating system would be a great advantage and would relieve any worries about compatibility problems.

Covering my tracks

I wanted to do a track with some HD footage that I could try doing independently, and I wanted to be able to remove the tracking markers from the footage.

I liked my spinning top and wanted to use it in some HD footage, and I feel it looks really good. But I have been having so much trouble removing the tracking dots.

I have tried several different ways:

The clone tool made such a smudgy mess, I’m not really sure how; towards the end I ended up with double the number of tracking dots I started with. I haven’t completely understood how it works, and I find that I work best with programs and tools when I understand how they work. It might just take me a little longer to grasp this one.

I tried a perspective corner pin track with a cropped piece of the table taken from one of the PNG sequences of the footage.

But even with some colour correction it didn’t work at all. There are some reflections that change on the table, and the panel is really obvious.

I tried two ways of masking. I used one frame from the PNG sequence and used the clone tool in Photoshop to remove the tracking dots. Then, by keyframing the rotation and position, I roughly simulated the camera’s path using the cross where all four tables meet, and put an animated mask on that layer on top of the tracking dots.

It seems to sort of work, but the colour changes too much in the table for any feathering to blend in inconspicuously.
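
If I were to script the same patch idea instead of hand-keyframing it, one rough approach (my own sketch using OpenCV, not what I actually did; the file names are invented) would be to find each dot with template matching and stamp the clean patch over it every frame. It would still hit the same lighting problem, but the tracking part would be automatic:

    import cv2

    dot = cv2.imread("dot_template.png")     # small crop of one tracking dot
    clean = cv2.imread("clean_patch.png")    # clone-stamped clean table patch
    cap = cv2.VideoCapture("spinning_top.mp4")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = cv2.matchTemplate(frame, dot, cv2.TM_CCOEFF_NORMED)
        _, score, _, (x, y) = cv2.minMaxLoc(result)
        if score > 0.8:                      # only patch confident matches
            h, w = clean.shape[:2]
            frame[y:y + h, x:x + w] = clean  # paste the patch over the dot
        cv2.imshow("patched", frame)
        if cv2.waitKey(1) == 27:             # Esc to quit
            break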

I also tried individually masking out the spots.

This time I tried duplicating the video layer, but there would be points where the tracking dots on the masked layer would be too close to the dots on the base layer. In fact, towards the end one of the dots unavoidably appears. Again, I don’t really understand how to correct it, but even so it didn’t really work either.

It was the same problem as the other masking test: the light doesn’t fit, and with this mask, like the last one, the patched footage doesn’t move like the base footage at all.

So as you can imagine this is very frustrating. I took the advice of some friends and emailed a few VFX companies to see if they could help, and to see how they would tackle it. I haven’t heard back yet, but it has only been a few days.