Animation:Master

Bendytoons

Forum Members
  • Posts: 517
  • Joined
  • Last visited

Everything posted by Bendytoons

  1. That looks great. The stutter in tracking is probably because your phone couldn't keep up a regular frame rate. I've found that using a miniDV camcorder and then capturing the footage gives the best results because the frame rate is rock solid. About $400 last I checked, a steal when you consider that competing (though not competitive, to my mind) products go as high as $20K.
  2. I can't speak for Zign Track, but my face capture experiments included both eye and blink tracking. Eye tracking can be done by tracking the iris as a marker. The results are mediocre at best because the marker is very large in relation to the movement, but you do get some of the natural eye jitter. I tracked blinks by putting a small piece of colored tape on my eyelash and measuring its relationship to a marker under my eye. Again, the track is kind of rough, but better than nothing.
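     A minimal sketch of that blink measurement in Python, assuming hypothetical marker coordinates and hand-measured open/closed calibration distances (not actual Zign Track or Syntheyes output, just the idea):

        def blink_amount(lash_y, under_eye_y, open_dist, closed_dist):
            # Distance between the eyelash marker and the under-eye reference.
            dist = abs(lash_y - under_eye_y)
            # Map open_dist..closed_dist onto 0 (eye open) .. 1 (eye closed).
            t = (open_dist - dist) / (open_dist - closed_dist)
            return max(0.0, min(1.0, t))  # clamp, since raw tracks are noisy

        # Example: markers sit 22 px apart with the eye open, 4 px when closed.
        print(blink_amount(lash_y=118.0, under_eye_y=106.0,
                           open_dist=22.0, closed_dist=4.0))  # ~0.56, mid-blink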
  3. Luuk, That is beautiful. Really nice fidelity, even when turning from the camera. My experiments have been on hold for many months, but I'd love to swap notes. Bendy
  4. This is a great idea. I recall it used to be possible to select a spline on a model as a path constraint target, but now I can only select a spline made in the chor with the path tool. What's gone wrong here? Hold down the shift key when choosing the spline, according to this thread: Creating a path, can it be done in a model?
  5. Basically, yes. You have to set up the puppet to work with the system. That involves adding a specific set of bones that hold the data, and using those bones to drive other bones and poses. So there is always some setup for a new puppet, but once set up you can just drop the action on and go.
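     A loose illustration of that data-bone idea in Python, with hypothetical bone and pose names; the real work happens in the rig, this just shows the one-time mapping from capture bones to poses:

        # Capture channels land on dedicated data bones; each rig control
        # reads from them through a per-puppet mapping set up once.
        capture = {"jaw_data": 0.6, "brow_data": 0.3}      # normalized channels
        mapping = {"jaw_data": ("jaw_open_pose", 100.0),   # (pose name, scale)
                   "brow_data": ("brow_raise_pose", 80.0)}

        poses = {pose: capture[bone] * scale
                 for bone, (pose, scale) in mapping.items()}
        print(poses)  # {'jaw_open_pose': 60.0, 'brow_raise_pose': 24.0}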
  6. This is the second piece with my animated self-portrait. Mostly it was a test to see how easily a second mo-cap session could be dropped on the puppet. My approach tries to do all adjustment on the puppet, rather than on the mo-cap data. The goal is to just drop the action on the character and be 95% done. The reality was only about 85%, but that's not bad for a first shot. Here is a link to the piece on YouTube: As with the last piece ( ), I hope to post this on A:M Films, but I'm waiting to hear back from HASH about that. Ben
  7. I am not insulted, and it is possible you are having lag issues with YouTube; I do sometimes. Having said that, this is mo-cap driven, so the movement is in proper relation to the sound. The kind of anticipation you are talking about is built in. As I mentioned in an earlier post, the way to manipulate this is to adjust the puppet, which I will do on future sessions. BTW, 3 or 4 frames seems like too much lag; I usually do 1 or 2, but if it works... B
  8. Thanks for the kudos, everyone. If I can figure out how to market it, I will. I did write the poem. Paul, that's easy to do, but I think unneeded. The brows and eyes are doing a good job of mimicking my performance, but the performance was pretty lame. My goal here is to create something that needs as little hand adjustment as possible. I've designed it so that all the tweakage happens on the puppet, so if you want more brow action you adjust the curve of the brow control or the brow poses. The idea is that you'll be able to just drop the capture onto any rigged puppet and have a decent performance.
  9. The final (for now) version of this piece is now posted on YouTube at: I'll try to have it up on A:M Films soon as well. Ben
  10. Literally, sealing wax is the wax dripped on a document and stamped with a seal. However, it probably has other connotations I'm not aware of.
  11. Luuk, You might try two emitters, with different settings, in the same place. The second emitter can be set to obscure the striping of the first and vice versa. It's a low tech solution but has worked for me in other situations with sprites. Ben
  12. I noticed this, and posted a similar question a couple of months ago. Then I figured the problem out and found an answer. The problem is that A:M only supports integer project frame rates. It can place keys between frames, but will only render a whole number of frames per second (Hash Inc., if I have this wrong, forgive me). The solution is to change the frame rate of the footage in Syntheyes to 30 fps. It will then export at that frame rate, and will import into A:M beautifully. It doesn't matter that you change the frame rate, because when you later compile your individual frames back into your movie you can change it back in your editing package. You can see Syntheyes export in action in my current "animated self portrait" WIP thread, and in earlier WIP threads with the "bug" character. Ben
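     A quick bit of arithmetic (illustrative Python, with an assumed 300-frame clip) showing why retagging 29.97 fps footage as 30 fps is harmless: the frames themselves are untouched, only their timestamps shift slightly:

        frames = 300                    # ten seconds of NTSC footage
        as_shot   = frames / 29.97      # ~10.010 s at the native rate
        relabeled = frames / 30.0       # 10.000 s once retagged for A:M
        print(f"{as_shot:.3f} s -> {relabeled:.3f} s, same {frames} frames")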
  13. Vance, good catch. Though I have to say I prefer the term "high-strung"; "psychotic" has such negative connotations.
  14. Here is a 45-degree(ish) clip of the same bit. The inside of the mouth looks a little funny because I have a non-shadow-casting light, and it's not completely textured. shot2.mov
  15. Paul, the capture is all post-processed. I'd love to play with live capture again someday, but I don't have $20K lying around for a system. This stuff was all done with Syntheyes.
  16. Yeah, the eyes are looking around. I'm afraid my performance was a little distracted. I will post another angle soon. As far as blinks go, they are actually tracked and in the performance, just not the snippet I posted. It's certainly not totally mimicking the live performance yet, but does get some nuance. The head "freezing" is mostly my performance I suspect, though there are still some glaring issues with the head and neck capture. I'll post more tests as I have them.
  17. I have been pretty quiet recently because I've been working hard on virtual me. I wanted to experiment with puppeting a realistic character, and I figured I was the best reference available. The model is not quite as elegantly splined as some of the other self-portraits that have been posted recently, but my focus was on movement. So here's a first test of virtual me. There are still some tweaks to be made in both the motion and the model, but this is a promising start. Sorry it's so short, but I had to fit within the 2 MB limit. take20.mov
  18. Choreography actions by design last the length of the choreography. If you want actions that are shorter than that, just use regular actions. However, it sounds like your problems mostly arise from having the second action's blending set to "replace", which is the default. Try changing it to "add"; this makes each new choreography action a layer that adds to the layers beneath it. You can also animate the influence percentage for each action. I would also suggest your basic premise is incomplete. Multiple choreography actions can be used to "better structure a scene", but they can also be used to tweak animation very precisely and non-destructively. A:M has the best non-linear animation (reusable motion) tools I have ever used. The ease of layering, re-timing, and reusing animation is unbeatable. This is where the real power of multiple choreography actions shows. B
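     A rough sketch of the difference between the two blending modes, as illustrative Python rather than anything A:M actually runs; the function and values are made up:

        def blend(base, layer, mode, influence=1.0):
            if mode == "replace":
                return base + (layer - base) * influence  # layer overrides the base
            if mode == "add":
                return base + layer * influence           # layer offsets the base
            raise ValueError(mode)

        walk  = 30.0   # degrees of rotation from the base walk action
        tweak = -10.0  # head turn from a second choreography action
        print(blend(walk, tweak, "add"))      # 20.0: both actions contribute
        print(blend(walk, tweak, "replace"))  # -10.0: the tweak wipes out the walk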
  19. Hmm. That's strange. Using v13? I am using 13s. I just ran the test again, with different results. Basically I get no difference no matter what the setting: Processor 0: 18 sec, Processor 1: 17 sec, Processors 0 & 1: 17 sec. My conclusion is that for me it makes no difference, though I can imagine that if you ran many apps at once it might.
  20. This has the opposite effect from the one described on my machine. I set affinity to processor 1 only and it slows down about 33% compared to being set to both 0 and 1.
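     For anyone who wants to script this test instead of clicking through Task Manager's "Set Affinity", here is one possible way, assuming the third-party psutil package (this is nothing built into A:M):

        import psutil  # pip install psutil; affinity works on Windows and Linux

        proc = psutil.Process()  # this process; pass a PID for another one
        print("before:", proc.cpu_affinity())
        proc.cpu_affinity([1])   # restrict to processor 1 only
        print("after:", proc.cpu_affinity())
        proc.cpu_affinity(list(range(psutil.cpu_count())))  # restore all CPUs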
  21. The "ease " value defines where on the path the object is. You can set keyframes however you want, but by default it uses the length of the chor.
  22. Looks good. You're a man after my own heart.
  23. Nice. When you make a storyboard into a video it's called an animatic.
  24. I'd swear I visited that camp at Burning Man 2004.
  25. Surprisingly, Bug was not the only one to try out for this part. Here is a fragment of the monkey trying to play against type. Mostly this was a test to see how transferring between models would work. The monkey is old and wasn't originally designed for this, and yet he performed quite well. bug_tryout.mov