
Realtime Performance-Based Facial Animation (SIGGRAPH 2011)

26 Oct

SIGGRAPH 2011 Paper Video: This paper presents a system for performance-based character animation that enables any user to control the facial expressions of …
Video Rating: 4 / 5

 
 


  1. wwwneomixcombr

    October 26, 2013 at 8:20 pm

    Hi! Not ready yet? Do I need to buy a licence for your software? Thanks!

     
  2. arkeya100

    October 26, 2013 at 8:32 pm

    Hi there! First off: amazing job, congrats!
    Just a quick question: how did you get this rendering with your “input scans”?
    When I show the depth image it’s just grayscale and with poor accuracy.
    Thanks a ton for your answer 🙂 Cheers
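A guess at what may be going on, not from the authors: “input scan” renderings in papers like this are usually produced by estimating per-pixel surface normals from the depth map and applying simple diffuse shading, rather than displaying the raw depth values as grayscale. A minimal sketch with synthetic depth data (all names and parameters here are illustrative):

```python
import numpy as np

def shade_depth(depth, light=(0.0, 0.0, 1.0)):
    """Render a depth map with Lambertian shading of estimated normals.

    depth: 2D array of depth values in meters (0 = no reading).
    Returns a 2D array of intensities in [0, 1].
    """
    # Estimate the surface gradient with central differences.
    dzdy, dzdx = np.gradient(depth.astype(np.float64))
    # Per-pixel normal direction: (-dz/dx, -dz/dy, 1), normalized.
    normals = np.dstack([-dzdx, -dzdy, np.ones_like(depth, dtype=np.float64)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Lambertian (diffuse) intensity: clamped dot product with the light.
    light = np.asarray(light, dtype=np.float64)
    light = light / np.linalg.norm(light)
    shading = np.clip(normals @ light, 0.0, 1.0)
    shading[depth == 0] = 0.0  # mask pixels with no depth reading
    return shading

# Synthetic "face": a hemisphere bulging toward the camera.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = (xx - w / 2) ** 2 + (yy - h / 2) ** 2
depth = np.where(r2 < 400, 1.0 - 0.001 * np.sqrt(np.maximum(400 - r2, 0)), 0.0)
img = shade_depth(depth)
```

The accuracy itself is limited by the sensor; the paper’s point is that tracking against a blendshape model compensates for noisy input, so the shading above only affects how the scans look, not the tracking quality.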

     
  3. ohmss006

    October 26, 2013 at 9:26 pm

    Hello again, so sorry for the very late reply, but how is it going? And how did it go at Siggraph Asia 2011? 😀
    Also, I was wondering: in theory, could one capture the facial animation using this method and apply it to any 3D head model? I might want to do some tests.

    Please let me know, this stuff is really great! 😀

     
  5. m3htrix

    October 26, 2013 at 10:44 pm

    While not perfect, these are by far the most impressive results I have seen using a commercially available camera. Keep up the good work!

     
  6. justicefadingtoblack

    October 26, 2013 at 11:30 pm

    Wonderful research, loved going through your article, and the results are amazing. Looking forward to seeing it integrated in a commercial application.

     
  7. Nicholas Vrana

    October 27, 2013 at 12:19 am

    We’re building software which does similar work (not for profit of course) in a lab at a university in Dallas. Any tips?? 🙂

     
  8. Hao Li

    October 27, 2013 at 12:58 am

    The current version works with OpenNI; before, it was libfreenect, which I believe is the OpenKinect SDK, so you can use both.

     
  9. Nicholas Vrana

    October 27, 2013 at 1:12 am

    Did you guys use the OpenKinect SDK?

     
  10. Hao Li

    October 27, 2013 at 1:12 am

    @EnneagramVIdeo hao@hao-li.com

     
  11. David Fauvre

    October 27, 2013 at 1:43 am

    What is your contact information? Thank you.

     
  12. Hao Li

    October 27, 2013 at 2:19 am

    Yeah, we are going to commercialize standalone software and also a Maya plugin very soon. We will showcase the product at Emerging Technologies at Siggraph Asia 2011.

     
  13. ohmss006

    October 27, 2013 at 2:34 am

    Hi, I would like to ask (again): does anyone know what is happening with this at the moment? Is there an application or something?

     
  14. ohmss006

    October 27, 2013 at 2:42 am

    Hi, I would like to ask, if you don’t mind, what is happening with this at the moment?

     
  15. ChaosKaiser

    October 27, 2013 at 2:53 am

    Seconded! This would make animation SO much easier than fighting to get each vertex to look just right. Not to mention that, coupled with a voice changer, it could make for some comedic situations. XD

    Having a wrap-around screen helmet under a cloak, with a 3D animated skull rigged for pulling off emotions, would be AWESOME at a Halloween party! XD

    So when would something like this be available? Or is it already? If so, where can I get it and will it cost a kidney or other various parts?

     
  16. wazackenzie

    October 27, 2013 at 3:17 am

    Can this thing spit out half-decent C3D data for Face Robot? Can I have a copy?

     
  17. ILoveEpicMusic

    October 27, 2013 at 3:25 am

    2:35 - 3:45 hahahahahhahahahahaahahha

     
  18. yanivcogan1

    October 27, 2013 at 4:14 am

    this + yoostar 2

     
  19. ASLanimator

    October 27, 2013 at 4:28 am

    Hi, I’m a software developer for American Sign Language (ASL) materials. Your work on facial expression looks great. My past research on ASL motion capture showed a huge need for facial expression. Please let me know if you would be interested in developing the software further. I’m looking into using motion capture with Haptek avatars. I’m writing a grant and looking for collaborators.

     
  20. termi892

    October 27, 2013 at 4:59 am

    Is there going to be any publicly available mocap program?

    It’s been a while since the Kinect was released and there is still no good mocap program for it, although so many people keep mentioning how well suited it is as a mocap device.

    There’s Brekel, but the quality is pretty bad; I’ve tried it myself.
    There’s iPi Soft, which costs $200.
    No facial mocap tool is available.

     
  21. Hao Li

    October 27, 2013 at 5:16 am

    There are several problems when using two Kinects at once: one is synchronization, and the other is that the interference between the projected patterns would decrease the quality. It doesn’t have to be a Kinect; it can be any real-time 3D scanner.

     
  22. ben dorman

    October 27, 2013 at 5:54 am

    Would there be a way to use two Kinects at once to help increase resolution, perhaps with a calibration method? I am seriously waiting for someone to release a 3D Studio Max version (or anything commercial), as I am a 3D freelancer. I already own iPi Soft Mocap for the Kinect, and that is great for body movement; this would complement it very well. Nice vid and good luck at Siggraph, looks to be a great year to go.

     
  23. held2012

    October 27, 2013 at 6:49 am

    wow =D

     
  24. yanivcogan1

    October 27, 2013 at 7:40 am

    this + yoostar 2 = awesome

     
  25. John Ang

    October 27, 2013 at 7:57 am

    Awesome!